Emotions affect all aspects of human life, and humans hope that computers can recognize human emotions so as to better understand and assist people. Emotion recognition techniques are now widely studied, including facial-expression recognition, posture recognition, and methods based on physiological signals. The first two are somewhat subjective because subjects may conceal their emotions. Emotion recognition based on physiological signals is more objective; among these signals, EEG has been studied extensively because it directly reflects human emotional states, while peripheral physiological signals can assist EEG to improve recognition performance. However, the large number of electrodes needed to collect emotional EEG data causes problems such as poor portability and information redundancy. In addition, current performance on cross-subject EEG emotion recognition tasks remains low, and it is difficult to accurately reflect human emotional states using single-modal EEG signals alone. To address these problems, this paper studies emotional EEG channel selection and multimodal emotion recognition. The main contributions are as follows.

(1) To address high device complexity, information redundancy, and heavy computation, an inference-statistics-based emotional EEG channel selection algorithm is proposed. The channel selection module is embedded into the EEG emotion recognition model, and dynamic channel selection is performed while the model carries out emotion recognition inference; that is, the parameters of the channel selection module and of the recognition network are jointly trained in a data-driven manner. The frequency with which each EEG channel is selected is counted and ranked, and different numbers of channels are then used for the emotion recognition task. Experimental results show that when the number of EEG channels is reduced from 32 to 15
leads, the emotion recognition performance decreases by less than 1%.

(2) To address the poor generalization of cross-subject EEG emotion recognition models caused by uneven distributions of EEG sample features, which arise from noisy, non-stationary EEG signals and differences in emotional expression across subjects, a bi-strategy training algorithm is proposed. Two training strategies, Boost and Gradient Descent, alternately update the EEG emotion recognition model to adjust the feature distribution and extract more robust emotional features: Boost updates the sample weights of the input EEG data, and Gradient Descent updates the network parameters during training. Visualization and recognition experiments show that the method effectively adjusts the distribution of emotional EEG sample features and improves cross-subject EEG emotion recognition performance, with accuracies of 71.25%, 71.48%, and 71.80% on the valence, arousal, and dominance dimensions, respectively.

(3) To address the difficulty of accurately reflecting human emotional states with single-modal EEG signals, a multimodal emotion recognition method that fuses EEG and peripheral physiological signals is proposed. The method is an end-to-end observation-level fusion approach. It first builds a multimodal emotion recognition model that extracts shallow emotional feature representations from EEG, EMG, and GSR through a multimodal scaling-layer architecture and stacks them into a three-dimensional multimodal spectrogram-like feature tensor; three convolutional layers then perform feature transformation and modality fusion for subsequent emotion classification; finally, a general framework for multimodal emotion recognition is established. The multimodal emotion recognition model is trained with the bi-strategy training algorithm combining the boosting algorithm and the gradient descent
method. Experiments verify that fusing the three physiological modalities yields better emotion recognition performance than either unimodal signals or pairwise modality combinations.

In summary, this paper applies deep learning methods to emotional EEG channel selection, EEG emotion recognition, and multimodal fusion emotion recognition, all of which show superior performance on cross-subject emotion recognition tasks. These results are important for the practical application of physiological-signal-based emotion recognition.
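The alternation between the two training strategies can be illustrated with a minimal sketch. This is not the paper's implementation: it uses a toy logistic-regression model on synthetic features as a stand-in for the EEG emotion recognition network, and an AdaBoost-style exponential re-weighting as a stand-in for the Boost step; all variable names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for emotional EEG features: 200 samples, 8 features,
# binary labels (e.g. high vs. low valence). Purely illustrative data.
X = rng.normal(size=(200, 8))
true_w = rng.normal(size=8)
y = (X @ true_w + 0.3 * rng.normal(size=200) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.zeros(8)                            # model parameters (Gradient Descent strategy)
sample_w = np.full(len(y), 1.0 / len(y))   # per-sample weights (Boost strategy)

for epoch in range(50):
    # Gradient Descent strategy: update parameters on the weighted log-loss.
    p = sigmoid(X @ w)
    grad = X.T @ (sample_w * (p - y))
    w -= 1.0 * grad

    # Boost strategy: re-weight samples, emphasising those still misclassified,
    # which shifts the effective feature distribution seen by the next update.
    err = np.abs((sigmoid(X @ w) > 0.5).astype(float) - y)  # 0/1 error per sample
    sample_w *= np.exp(0.5 * err)          # AdaBoost-style exponential up-weighting
    sample_w /= sample_w.sum()             # renormalise to a distribution

acc = np.mean((sigmoid(X @ w) > 0.5) == y)
```

The key design point mirrored here is that the two updates operate on different objects: the Boost step never touches the network parameters, and the gradient step never modifies the sample weights directly, so hard-to-classify samples gradually gain influence over the learned features.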