With the development of computer and sensor technology, human-computer interaction (HCI) has made great progress, and emotion recognition, an important part of HCI, has become a research hotspot. The electroencephalogram (EEG), a physiological signal with high temporal resolution that directly reflects the activity of brain regions under different emotional states, is widely used for emotion recognition. However, existing feature extraction methods and classification networks still face problems and challenges. For example, the representation ability of features from a single domain is weak and ignores the complementary information among features from different domains, and the deepening of classification networks makes training heavy and time-consuming. Moreover, because hearing-impaired individuals may suffer psychological effects from difficulties in emotional interaction, research on EEG-based emotion recognition for hearing-impaired people is crucial. To address the above issues, the main work of this paper is as follows:

(1) Referring to the experimental paradigm of the public SEED dataset, 30 stimulus clips covering different emotions were selected, and an EEG experimental paradigm for hearing-impaired students was designed and implemented. EEG signals were collected from 15 hearing-impaired students while they watched the stimulus clips, and a hearing-impaired EEG dataset (HIED) with six emotions (happiness, anger, encouragement, sadness, neutral, and fear) was initially constructed.

(2) An FPN-SVM emotion classification model based on biharmonic spline interpolation is proposed to address the weak representation of feature information in a single domain. Differential entropy (DE) features of the EEG signals are extracted to capture frequency-domain information, and the spatial correlation among electrode channels is captured by biharmonic spline interpolation. The semantic
information of low-level and high-level feature maps is then integrated by a feature pyramid network (FPN) to reduce the semantic gap between feature maps, and a linear-kernel SVM classifier completes the emotion recognition. Subject-dependent 2-class experiments on the DEAP dataset achieved 94.29% on valence and 96.97% on arousal, and subject-independent experiments achieved 80.38% on valence and 82.33% on arousal. Comparison with existing studies shows that the FPN-SVM model achieves better classification performance, demonstrating its effectiveness.

(3) An S2D-RFPN emotion classification model based on time-frequency-spatial features is proposed to address the insufficient feature extraction ability and long training time of classical models. The preprocessing signal matrix (PSM), symmetric difference matrix (SDM), symmetric quotient matrix (SQM), and differential entropy matrix (DEM) are constructed and fused at the sampling-point level to obtain complementary information from different domains. A space-to-depth (S2D) layer is used to reduce the training cost and save training time, and the RFPN fuses information among feature nodes through shortcut connections, overcoming the one-way computation limitation of the FPN-SVM model. The 3-class experiments on the SEED dataset achieved 96.84%, and the 6-class experiments on the HIED dataset achieved 87.87%. Comparison with existing studies demonstrates the feasibility and effectiveness of the proposed feature fusion strategy and models.
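The DE features mentioned in (2) are commonly computed in closed form under a Gaussian assumption on each band-filtered EEG segment. A minimal sketch (the function name and toy signal are illustrative, not from this work):

```python
import numpy as np

def differential_entropy(signal):
    """DE of a segment, assuming the band-passed signal is roughly Gaussian.

    Closed form for a Gaussian variable: DE = 0.5 * ln(2 * pi * e * variance).
    """
    return 0.5 * np.log(2 * np.pi * np.e * np.var(signal))

# Toy segment with variance exactly 1, so DE = 0.5 * ln(2*pi*e) ≈ 1.4189
de = differential_entropy(np.array([1.0, -1.0]))
```

In practice one such scalar is computed per channel and per frequency band, yielding the per-channel values that the spatial interpolation step consumes.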
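The biharmonic spline step in (2) maps per-channel DE values from scattered electrode coordinates onto a regular 2-D grid, producing an image-like feature map. A rough sketch, assuming random placeholder electrode positions and using SciPy's thin-plate-spline RBF interpolator, whose kernel is closely related to the 2-D biharmonic Green's function (the exact interpolation scheme and electrode layout of the original work may differ):

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)
electrodes = rng.uniform(-1.0, 1.0, size=(32, 2))  # placeholder (x, y) per channel
de_values = rng.normal(size=32)                    # one DE value per channel

# Smooth spline surface that passes exactly through the electrode values
interp = RBFInterpolator(electrodes, de_values, kernel="thin_plate_spline")

# Evaluate on a regular 16x16 grid to obtain a topographic feature map
xs = np.linspace(-1.0, 1.0, 16)
grid = np.stack(np.meshgrid(xs, xs), axis=-1).reshape(-1, 2)
feature_map = interp(grid).reshape(16, 16)
```

Stacking one such map per frequency band yields the multi-channel input that a CNN-style backbone (here, the FPN) can process.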
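The space-to-depth (S2D) operation in (3) trades spatial resolution for channels without discarding information, which is what lets it cut training cost cheaply. A minimal NumPy sketch of the rearrangement (block size and layout are illustrative):

```python
import numpy as np

def space_to_depth(x, block=2):
    """Rearrange (H, W, C) -> (H/block, W/block, C * block**2).

    Lossless: each block x block spatial patch is moved into the channel
    axis, so downstream layers see smaller feature maps but all values.
    """
    h, w, c = x.shape
    assert h % block == 0 and w % block == 0
    x = x.reshape(h // block, block, w // block, block, c)
    x = x.transpose(0, 2, 1, 3, 4)  # group the block offsets next to channels
    return x.reshape(h // block, w // block, c * block * block)

x = np.arange(4 * 4 * 2).reshape(4, 4, 2)  # toy (H=4, W=4, C=2) feature map
y = space_to_depth(x)                      # shape (2, 2, 8)
```

Because the mapping is a pure reshuffle, it halves the spatial extent per step while keeping every input value available to later convolutions.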