Emotion is a state that integrates a person's feelings, thoughts, and behaviors, and emotion detection has significant practical implications. Emotions can be detected in different ways, such as through facial expressions, voice, and physiological data. Among these, physiological signals are difficult to forge and therefore offer clear advantages for measuring spontaneous mental activity across emotional states. Human physiological signals can be measured by different imaging modalities, such as functional magnetic resonance imaging, magnetoencephalography, functional near-infrared spectroscopy, and electroencephalography (EEG). EEG has been widely used in emotion recognition due to its high temporal resolution, noninvasiveness, low cost, and portability.

In recent years, emotion recognition research based on machine learning and deep learning methods has attracted increasing attention from the academic community. Traditional emotion recognition algorithms typically adopt a "feature extractor + classifier" structure: after manual feature extraction, support vector machines, K-nearest neighbors, Bayesian networks, or other algorithms are used for classification. Because they rely on manual feature extraction, these algorithms are constrained by the researcher's experience and are sensitive to noise, making it difficult to ensure their robustness. Existing studies have shown that deep learning methods are effective for EEG-based emotional state recognition, but the performance of these algorithms still needs further improvement.

This thesis mainly uses deep learning methods to explore the subject-dependent trial-confusion and cross-trial paradigms of emotional state recognition. The specific research content is as follows.

(1) For the subject-dependent trial-confusion paradigm, existing recognition algorithms consider physiological characteristics of the emotional task, such as the asymmetry between the two cerebral hemispheres (event-related synchronization /
desynchronization), but they do not consider how this asymmetry manifests at different time scales. In view of this, this thesis extracts hemispheric asymmetry features at multiple time scales and fuses them with the spatial topology of the electrode distribution to construct a multi-branch deep learning algorithm comprising a spatial feature extractor and a temporal feature extractor. The algorithm decodes EEG signals from the time-domain and space-domain perspectives respectively, with several temporal branches extracting EEG feature information at different time scales; the information from all branches is then fused to produce the final classification result. The method is validated on two datasets, DEAP and DREAMER. The experimental results show that the algorithm achieves higher recognition accuracy and lower standard deviation than existing emotional state recognition algorithms.

(2) Compared with the trial-confusion paradigm, the cross-trial paradigm is closer to real application scenarios; however, because it is more difficult to study, relatively few results have been published. This thesis proposes a novel graph organization method that arranges all the EEG data of a single subject into one whole graph, and a graph neural network is used to extract features and classify the graph's nodes. The structure of the deep learning model is determined by modeling multi-layer sampling as a Markov random process, analyzing its properties, and selecting the most appropriate number of sampling layers. The proposed algorithm is validated on the DEAP dataset, and the rationality of the chosen number of sampling layers is analyzed. The experimental results demonstrate that the proposed graph data structure and the corresponding deep graph neural network algorithm can effectively improve the accuracy of emotion recognition tasks.
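The multi-scale hemispheric asymmetry features described in part (1) can be sketched as follows. This is a minimal NumPy illustration, not the thesis's actual feature extractor: the function name, the use of mean signal power as the per-channel feature, and the specific window lengths are all assumptions made for clarity.

```python
import numpy as np

def multiscale_asymmetry(eeg, left_idx, right_idx, scales, fs=128):
    """Hemispheric asymmetry at several time scales (illustrative sketch).

    eeg:       array of shape (channels, samples)
    left_idx:  indices of left-hemisphere channels
    right_idx: indices of the paired right-hemisphere channels
    scales:    window lengths in seconds, one per temporal branch
    """
    feats = []
    for win_sec in scales:
        win = int(win_sec * fs)            # window length in samples
        n_win = eeg.shape[1] // win        # non-overlapping windows at this scale
        for w in range(n_win):
            seg = eeg[:, w * win:(w + 1) * win]
            power = np.mean(seg ** 2, axis=1)          # per-channel mean power
            dasm = power[left_idx] - power[right_idx]  # left-minus-right asymmetry
            feats.append(dasm)
    return np.concatenate(feats)
```

Each window length plays the role of one temporal branch; in the thesis's model the per-scale features are learned and fused by the network rather than concatenated as done here.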
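The per-subject graph organization and the Markov view of multi-layer sampling from part (2) might look like the sketch below. This is an assumed construction for illustration only: the thesis does not specify how edges are formed, so a k-nearest-neighbor graph over segment features and a row-stochastic propagation matrix are used here as plausible stand-ins.

```python
import numpy as np

def build_subject_graph(node_feats, k=3):
    """Organize all EEG segments of one subject into a single graph:
    each node is a segment's feature vector, linked to its k most
    similar peers by cosine similarity (illustrative assumption)."""
    x = node_feats / np.linalg.norm(node_feats, axis=1, keepdims=True)
    sim = x @ x.T
    np.fill_diagonal(sim, -np.inf)        # exclude self-matches
    n = node_feats.shape[0]
    adj = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(sim[i])[-k:]    # k most similar nodes
        adj[i, nbrs] = 1.0
    return np.maximum(adj, adj.T)         # symmetrize the adjacency

def propagate(adj, feats, layers=2):
    """Repeated neighborhood averaging: each layer applies a
    row-stochastic (Markov) transition matrix, so stacking layers
    corresponds to running the chain for more sampling steps."""
    a_hat = adj + np.eye(adj.shape[0])                 # add self-loops
    p = a_hat / a_hat.sum(axis=1, keepdims=True)       # transition matrix
    h = feats
    for _ in range(layers):
        h = p @ h
    return h
```

Under this reading, choosing the number of sampling layers amounts to choosing how many steps of the Markov chain to run before node features over-mix, which is the property the thesis analyzes.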