With the vigorous development and wide application of artificial intelligence, human-computer interaction has emerged as a technology attracting increasing attention from researchers. As a key research area within human-computer interaction, emotion recognition plays a decisive role in the interaction experience between users and devices. Existing research has applied a variety of modal signals to emotion recognition, such as facial expressions, speech, and gestures. However, these non-physiological signals carry a degree of subjectivity: deliberate disguise or camouflage can mislead the judgment of emotion type. In contrast, physiological signals such as EEG, ECG, and electrodermal activity offer greater objectivity and authenticity. This paper explores emotion recognition from EEG signals. The main work is summarized as follows:

(1) EEG data has a topological structure in space that commonly used modeling methods cannot capture. To address this problem, this paper proposes a single-modal EEG emotion recognition method based on a spatiotemporal graph network, which learns EEG features at two levels: the spatial domain and the temporal domain. In the spatial domain, an adjacency matrix is first constructed to model the EEG channels as a graph, and a Graph-BERT network then extracts the spatial features of the EEG signal through subgraph division, node embedding, attention-based updating of node features, and node clustering. In the temporal domain, the spatial features obtained from each time segment are linked by a Long Short-Term Memory (LSTM) network to learn the temporal dependencies of the EEG signal, thereby completing the emotion recognition task. Experiments on the SEED and DEAP datasets show that this method learns EEG features more comprehensively and accurately and achieves a higher emotion recognition rate, demonstrating the feasibility and effectiveness of emotion recognition from EEG signals.
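To make the spatiotemporal pipeline of (1) concrete, the following PyTorch sketch gives one plausible reading of it: the adjacency matrix restricts an attention-based update of per-channel node features (a simplified stand-in for the Graph-BERT steps of subgraph division, node embedding, attention updating, and node clustering), and an LSTM then links the per-segment spatial features over time. All module names, layer sizes, and the mean readout are illustrative assumptions, not the thesis implementation.

```python
# Minimal sketch of the spatiotemporal graph idea; the single attention
# layer and mean readout are simplified stand-ins for the Graph-BERT
# pipeline described in the abstract, and all sizes are assumptions.
import torch
import torch.nn as nn


class SpatialGraphEncoder(nn.Module):
    """Attention-based update of per-channel (node) features for one segment."""

    def __init__(self, in_dim, hid_dim, n_heads=4):
        super().__init__()
        self.embed = nn.Linear(in_dim, hid_dim)                  # node embedding
        self.attn = nn.MultiheadAttention(hid_dim, n_heads, batch_first=True)

    def forward(self, x, adj):
        # x:   (batch, n_channels, in_dim)  per-channel features of one segment
        # adj: (n_channels, n_channels)     adjacency matrix; 0 = no edge
        h = self.embed(x)
        mask = adj == 0                                          # block non-adjacent nodes
        h, _ = self.attn(h, h, h, attn_mask=mask)
        return h.mean(dim=1)                                     # graph readout (stands in for node clustering)


class SpatioTemporalEEGNet(nn.Module):
    def __init__(self, in_dim, hid_dim, n_classes):
        super().__init__()
        self.spatial = SpatialGraphEncoder(in_dim, hid_dim)
        self.lstm = nn.LSTM(hid_dim, hid_dim, batch_first=True)
        self.cls = nn.Linear(hid_dim, n_classes)

    def forward(self, segments, adj):
        # segments: (batch, n_segments, n_channels, in_dim)
        spatial_feats = torch.stack(
            [self.spatial(segments[:, t], adj) for t in range(segments.size(1))],
            dim=1,
        )                                                        # (batch, n_segments, hid_dim)
        _, (h_n, _) = self.lstm(spatial_feats)                   # temporal dependencies
        return self.cls(h_n[-1])                                 # emotion logits


# Toy usage: 62 channels (as in SEED), 5 features per channel, 10 segments.
adj = (torch.rand(62, 62) > 0.7).float()
adj.fill_diagonal_(1.0)                                          # self-loops so every node attends to itself
model = SpatioTemporalEEGNet(in_dim=5, hid_dim=64, n_classes=3)
logits = model(torch.randn(8, 10, 62, 5), adj)                   # (8, 3)
```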
(2) Considering the diverse mechanisms by which emotions are triggered, this paper proposes an EEG emotion recognition method based on modal fusion. The method uses a multi-stage feature fusion module and a dual-modal attention mechanism to fully integrate the feature information of the EEG modality and the peripheral physiological modality, so that the features complement each other and the final accuracy of EEG-based emotion recognition improves. Specifically, the multi-stage feature fusion module first fuses, splits, and assigns weights to the features extracted from each single modality in order to update each modality's features; this module is embedded multiple times in the network architecture to fully fuse the information contained in the two modalities. Next, a dual-modal attention module weights the deep features of the two modalities to obtain an inter-modal attention vector. Finally, the attention vector, the EEG feature vector, and the peripheral physiological feature vector are concatenated and classified to complete the emotion recognition task. Experiments on the multi-modal DEAP and MAHNOB-HCI datasets show that the method outperforms single-modal emotion recognition methods in final recognition rate and also has clear advantages over existing emotion recognition methods based on dual-modal signal fusion.
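As an illustration of the fusion scheme in (2), the sketch below implements one plausible version: each fusion stage concatenates the two modal feature vectors, splits the projection back into two weight vectors, and uses them to update each modality; a dual-modal attention vector is then computed from the deep features of both modalities and concatenated with the EEG and peripheral feature vectors before classification. The gating rule, layer sizes, and number of stages are assumptions, not the thesis's exact modules.

```python
# Minimal sketch of the multi-stage fusion + dual-modal attention idea.
# The fuse -> split -> sigmoid-gate rule and the softmax attention vector
# are illustrative assumptions about the design described in the abstract.
import torch
import torch.nn as nn


class FusionStage(nn.Module):
    """One fusion stage: fuse the two modal features, split the result,
    and use the split weights to update each modality's features."""

    def __init__(self, dim):
        super().__init__()
        self.mix = nn.Linear(2 * dim, 2 * dim)

    def forward(self, eeg, phy):
        fused = self.mix(torch.cat([eeg, phy], dim=-1))          # fuse
        w_eeg, w_phy = fused.chunk(2, dim=-1)                    # split
        eeg = eeg * torch.sigmoid(w_eeg)                         # assign weights / update
        phy = phy * torch.sigmoid(w_phy)
        return eeg, phy


class DualModalFusionNet(nn.Module):
    def __init__(self, eeg_dim, phy_dim, dim, n_classes, n_stages=3):
        super().__init__()
        self.eeg_enc = nn.Linear(eeg_dim, dim)                   # single-modal encoders
        self.phy_enc = nn.Linear(phy_dim, dim)
        self.stages = nn.ModuleList([FusionStage(dim) for _ in range(n_stages)])
        self.attn_proj = nn.Linear(2 * dim, dim)                 # dual-modal attention
        self.cls = nn.Linear(3 * dim, n_classes)                 # attention + EEG + peripheral

    def forward(self, eeg_x, phy_x):
        eeg, phy = self.eeg_enc(eeg_x), self.phy_enc(phy_x)
        for stage in self.stages:                                # module embedded multiple times
            eeg, phy = stage(eeg, phy)
        attn = torch.softmax(self.attn_proj(torch.cat([eeg, phy], dim=-1)), dim=-1)
        out = torch.cat([attn, eeg, phy], dim=-1)                # concatenate and classify
        return self.cls(out)


# Toy usage: 160 EEG features and 32 peripheral features per sample.
model = DualModalFusionNet(eeg_dim=160, phy_dim=32, dim=64, n_classes=2)
logits = model(torch.randn(8, 160), torch.randn(8, 32))          # (8, 2)
```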