Emotion recognition has been applied in many fields, such as human-computer interaction, rehabilitation medicine, and neuroscience. However, most emotion recognition studies target healthy controls or patients with depression. Having lost hearing, a key channel of emotional communication, hearing-impaired students can perceive changes in interactive objects only through vision, touch, and other senses, which often biases their perception and expression of emotions. The research on emotion recognition of hearing-impaired students in this paper helps to understand how hearing-impaired students process emotions and to explore the influence of hearing loss on human emotion perception. Compared with single-modality emotion recognition, multimodal fusion provides more comprehensive feature information, which helps to improve recognition performance. Therefore, this paper proposes an emotion recognition framework based on fusing EEG topographic maps and facial expressions. The main contents are as follows:

(1) An experimental paradigm for multimodal emotion elicitation in hearing-impaired students based on video stimulation was designed and implemented, and a multimodal emotion dataset of hearing-impaired students was constructed. EEG signals and facial expression images of 15 hearing-impaired students were collected while they watched four kinds of emotional movie clips. In the selection of movie clips, the three categories of happiness, calmness, and sadness are consistent with the SEED dataset, while the fear clips were selected via ratings from 40 psychology graduate students.

(2) An emotion recognition method based on facial expression features was studied, and the CBAM_ResNet34 network, which combines the convolutional block attention module with a residual convolutional neural network, was proposed for feature extraction and classification. The experimental results show that the proposed deep learning method better perceives subtle changes in facial expression images and improves the accuracy of emotion recognition.

(3) An emotion recognition method based on EEG topographic maps was proposed, which transforms one-dimensional differential entropy (DE) features into two-dimensional EEG topographic maps to represent the emotional changes of hearing-impaired students. The CBAM_ResNet34 network was used to extract deep representational features related to emotional changes from the EEG topographic maps and to complete the emotion classification. Heat map analysis showed that the brain areas most associated with the emotional changes of hearing-impaired students were concentrated in the frontal, temporal, and occipital lobes.

(4) A multimodal emotion recognition framework based on the fusion of EEG topographic maps and facial expressions was presented. The two-dimensional EEG topographic maps and facial expression images were fused along the channel dimension to obtain multimodal fusion features, and emotion classification was completed with the deep learning method. The experimental results show that multimodal fusion effectively improves emotion recognition. The complementarity of the two modalities was then analyzed by constructing confusion matrices, which showed that the EEG topographic maps were more advantageous for identifying fear, while facial expressions classified happiness and sadness samples more accurately.
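To illustrate the attention mechanism used in the CBAM_ResNet34 network above, the following is a minimal NumPy sketch of CBAM's channel-attention branch only. It is not the trained network: the shared MLP weights are random placeholders, and the spatial-attention branch and the ResNet34 backbone are omitted.

```python
import numpy as np

def channel_attention(x, reduction=16, seed=0):
    """Re-weight the channels of a (C, H, W) feature map, CBAM-style:
    global average- and max-pool over space, pass both through a shared
    two-layer MLP, add, squash with a sigmoid, and scale the input."""
    c, _, _ = x.shape
    avg = x.mean(axis=(1, 2))   # (C,) global average pooling
    mx = x.max(axis=(1, 2))     # (C,) global max pooling
    # Shared bottleneck MLP; random weights here, for illustration only.
    rng = np.random.default_rng(seed)
    w1 = rng.standard_normal((c // reduction, c)) * 0.1
    w2 = rng.standard_normal((c, c // reduction)) * 0.1
    def mlp(v):
        return w2 @ np.maximum(w1 @ v, 0.0)  # ReLU in the bottleneck
    s = 1.0 / (1.0 + np.exp(-(mlp(avg) + mlp(mx))))  # sigmoid gate, (C,)
    return x * s[:, None, None]

x = np.random.default_rng(1).standard_normal((32, 8, 8))
y = channel_attention(x)
```

In the framework above, such a channel gate (followed by the analogous spatial-attention step) would be applied inside each residual block, letting the network emphasize the feature channels most relevant to emotional changes.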
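The differential entropy features mentioned in (3) are commonly computed per EEG channel and frequency band under a Gaussian assumption, where DE reduces to h = 0.5 * ln(2πeσ²). A minimal sketch follows; it assumes the input is an already band-pass-filtered segment from one channel, and the subsequent interpolation of per-channel DE values into a 2-D topographic map is not shown.

```python
import math

def differential_entropy(samples):
    """DE of a band-limited EEG segment, assuming the samples are
    approximately Gaussian: h = 0.5 * ln(2 * pi * e * sigma^2)."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / n  # ML (biased) variance
    return 0.5 * math.log(2.0 * math.pi * math.e * var)

# Unit-variance toy segment: DE equals 0.5 * ln(2*pi*e) ~ 1.4189
de = differential_entropy([1.0, -1.0, 1.0, -1.0])
```

One DE value per channel and band would then be placed at that electrode's scalp position and interpolated to form the two-dimensional EEG topographic map fed to the network.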