Nowadays, research in brain science has received unprecedented attention. The human brain is engaged in emotion and other high-level activities, so it is of great scientific value to explore emotion recognition through human electroencephalography (EEG) signals. However, cross-subject emotion recognition has long been hindered by individual differences in EEG signals: EEGs vary widely from person to person, making it difficult to recognize emotions across subjects. How to improve cross-subject emotion recognition has therefore been a focus of many studies, including this work. In recent years, more and more researchers have begun to explore the relationship between physiological signals and emotion, especially through EEG. Emotion recognition based on EEG signals faces two problems that cannot be ignored. First, because the generalizability of EEG features across subjects is poor, an emotion model trained on the EEG features of some subjects has difficulty accurately identifying the emotions of other subjects. An important reason for this is that some EEG features are redundant or irrelevant, so effective features must be selected from the extracted EEG features to improve the generalization ability of the model. Second, many previous works extracted only a few EEG features, or applied a single common machine learning method, to explore cross-subject emotion recognition. These approaches have two disadvantages: they cannot select the more effective features, and they neglect to explore and compare common machine learning methods. Therefore, in order to select effective features and address the shortcomings of previous work, this study first extracted 10 linear and nonlinear EEG features and combined them into high-dimensional features, from which effective features were then selected for cross-subject emotion classification. In this exploration, we propose a cross-subject emotion recognition method, ST-SBSSVM, that integrates a significance test, sequential backward selection, and a support vector machine. The effectiveness of the proposed method was examined on DEAP (a Database for Emotion Analysis Using Physiological Signals) and SEED (the SJTU (Shanghai Jiao Tong University) Emotional EEG Dataset). For cross-subject emotion recognition based on high-dimensional EEG features: (1) ST-SBSSVM improved the average accuracy of cross-subject emotion recognition by 12.4% on DEAP and 26.5% on SEED compared with common machine learning methods (e.g., SVM, KNN, PCA-SVM, PCA-KNN, random forest (RF), and sequential backward selection (SBS)); in addition, ST-SBSSVM was comparable to the deep learning methods used in similar works. (2) On DEAP, the recognition accuracy obtained with ST-SBSSVM was as high as that obtained with SBS, while ~97% of the program runtime was eliminated. (3) On SEED, ST-SBSSVM increased the recognition accuracy by ~6% over SBS, while ~91% of the program runtime was eliminated. (4) Compared with recent similar works, the method developed in this study for cross-subject emotion recognition was found to be advanced and effective. The average recognition accuracy of valence in this work was 72% on DEAP and 89% on SEED.
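The three-stage pipeline named by ST-SBSSVM (significance-test filtering, then sequential backward selection wrapped around an SVM) can be sketched with scikit-learn. This is an illustrative approximation on synthetic data, not the authors' implementation: the feature dimensions, the ANOVA F-test as the significance test, the `alpha` threshold, and the fraction of features retained by SBS are all assumptions for the sketch.

```python
import numpy as np
from sklearn.feature_selection import SelectFpr, SequentialFeatureSelector, f_classif
from sklearn.svm import SVC

# Synthetic stand-in for high-dimensional EEG features:
# 120 trials x 40 features, binary emotion labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 40))
y = rng.integers(0, 2, size=120)
X[:, :5] += y[:, None] * 1.5  # make the first 5 features class-informative

# Stage 1 (ST): significance test -- keep only features whose
# per-class difference is significant (ANOVA F-test, p < alpha).
st = SelectFpr(f_classif, alpha=0.05)
X_st = st.fit_transform(X, y)

# Stage 2 (SBS): sequential backward selection wrapped around an SVM,
# greedily dropping the least useful of the surviving features.
svm = SVC(kernel="rbf")
sbs = SequentialFeatureSelector(
    svm, n_features_to_select=0.5, direction="backward", cv=3
)
X_sel = sbs.fit_transform(X_st, y)

# Stage 3 (SVM): train the final classifier on the selected features.
svm.fit(X_sel, y)
print(X.shape[1], X_st.shape[1], X_sel.shape[1])  # shrinking feature counts
```

Running the significance test first is what saves most of the runtime reported in the abstract: backward selection scales poorly with dimensionality, so pruning clearly irrelevant features before the greedy search shrinks the search space it must evaluate.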