With the rapid development of Artificial Intelligence (AI), intelligent Human-Computer Interaction (HCI) devices have gradually entered people's daily lives. Although HCI devices can satisfy part of users' needs, they can hardly interact with humans emotionally; that is, they are poor at adjusting the interaction mode according to the user's psychological state. Hence, the function and application of HCI devices are greatly restricted. As one of the main carriers of information exchange, emotion plays an important role in people's daily communication. Developing HCI devices that can autonomously perceive human emotion has therefore become an important research direction in the fields of AI and HCI. At present, the main data sources of affective computing can be roughly categorized into three types: facial expressions, speech signals, and physiological signals. Owing to their real-time nature and the difficulty of camouflaging them, electroencephalogram (EEG) signals, one type of physiological signal, have attracted more and more researchers' attention. In terms of recognition models, EEG-based emotion classification methods can be roughly categorized into traditional machine learning and deep learning, where traditional machine learning methods mainly include Support Vector Machines (SVM), K-Nearest Neighbor (KNN) classifiers, Logistic Regression (LR), and so on. Compared with traditional machine learning, deep learning methods perform better in feature representation, complex task modeling, and abstract cognitive recognition. Deep learning can thus simulate human emotional cognition and thinking modes, which enables it to handle the more complex emotion-related EEG signals. Because of differences in the data collection process, living environment, and physical and mental state, emotion-related EEG signals can differ greatly between subjects, which is one of the main factors that may greatly reduce the
accuracy of deep learning models and limit the application scenarios of emotion recognition. Therefore, EEG-based emotion recognition and the corresponding transfer learning are researched in this thesis. First, deep learning methods are used to classify emotion-related EEG signals. Then, aimed at the joint features of multiple subjects' emotional EEG signals, Transfer Component Analysis (TCA) is employed. The specific works are as follows:

(1) A Convolutional Neural Network (CNN) is employed to perform the emotion recognition task on EEG databases. Firstly, frequency-domain and spatial-location features, such as differential entropy (DE), power spectral density (PSD), and hemispheric asymmetry, are extracted after preprocessing the original 62-channel emotional EEG signals. Then, the CNN is used to recognize three different emotion states, i.e., positive, neutral, and negative, based on the extracted features. Finally, the SJTU Emotion EEG Dataset (SEED) is employed to verify the extracted features combined with the CNN, for which the comprehensive average recognition accuracy reaches 88.01%. Compared with the SVM, Deep Neural Network (DNN), and Extreme Learning Machine (ELM), the average accuracy of the proposed method is increased by 14.56%, 13.53%, and 15.72%, respectively. These experimental results demonstrate the effectiveness of the proposed method.

(2) The mixed DE feature extracted from emotional EEG is used to carry out the transfer research. Firstly, DE features are extracted from the emotional EEG data of each of the 15 subjects in the SEED database. Then, the TCA algorithm is used to transfer the features and reduce their dimension. Next, after applying the TCA algorithm, the features of 14 of the subjects are selected as the training set, and the features of the remaining subject as the test set. Finally, a deep learning method is employed to recognize the different emotion states. To ensure the reliability of the experimental results, the experiment is repeatedly
carried out via 15-fold cross-validation, and the best average accuracy is 58.49%. Compared with the original DE features (52.26%), the optimal average accuracy is increased by 6.23%. These experimental results illustrate the effectiveness of the proposed method.

(3) A method combining different features is proposed to perform transfer learning on the mixed emotion-related EEG signals of multiple subjects. Firstly, the DE features, the spatial features, and their joint (DE-Spatial) features extracted from 14 of the subjects are used to train the deep learning model. Then, the same features extracted from the remaining subject are used to test the proposed model, where the average classification rates are 52.26%, 46.77%, and 53.88%, respectively. Finally, TCA is used to transfer the above mixed features and reduce their dimension, and the corresponding optimal average recognition rate reaches 85.73%. The remarkable classification accuracy demonstrates that the transferred joint features are capable of reducing the difference between feature domains and improving the performance and applicability of the proposed model.
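As an illustration of the DE feature used throughout the thesis, the following minimal sketch computes band-wise differential entropy under the usual Gaussian assumption (DE = 0.5·ln(2πe·σ²) per channel and band). The five band edges and the 200 Hz sampling rate are common conventions for SEED-style data, not settings quoted from the thesis:

```python
import numpy as np
from scipy.signal import butter, filtfilt

# Assumed standard EEG frequency bands (Hz); band edges are a common
# convention, not taken from the thesis itself.
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 14),
         "beta": (14, 31), "gamma": (31, 50)}

def differential_entropy(x):
    """DE of a signal assumed Gaussian: 0.5 * ln(2 * pi * e * var)."""
    return 0.5 * np.log(2 * np.pi * np.e * np.var(x))

def extract_de_features(eeg, fs=200):
    """eeg: (n_channels, n_samples) array -> (n_channels, n_bands) DE matrix."""
    feats = np.empty((eeg.shape[0], len(BANDS)))
    for j, (lo, hi) in enumerate(BANDS.values()):
        # Zero-phase band-pass filtering, then DE per channel.
        b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        filtered = filtfilt(b, a, eeg, axis=1)
        feats[:, j] = [differential_entropy(ch) for ch in filtered]
    return feats
```

For 62-channel SEED data this yields a 62 × 5 DE matrix per window, which can then be fed to a classifier such as the CNN described above.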
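The TCA step used in parts (2) and (3) can be sketched as follows, following the standard formulation of Pan et al.: transfer components are found by minimizing the maximum mean discrepancy (MMD) between source and target distributions in a kernel-induced space, via a generalized eigenproblem. The RBF kernel and the `mu`, `gamma`, and `dim` values here are illustrative assumptions, not the settings used in the thesis:

```python
import numpy as np

def tca(Xs, Xt, dim=30, mu=1.0, gamma=1.0):
    """Transfer Component Analysis sketch with an RBF kernel.
    Xs: (ns, d) source features; Xt: (nt, d) target features.
    Returns transformed features of shape (ns, dim) and (nt, dim)."""
    X = np.vstack([Xs, Xt])
    ns, nt = len(Xs), len(Xt)
    n = ns + nt
    # RBF kernel matrix over the stacked source + target samples.
    sq = np.sum(X ** 2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))
    # MMD matrix L: tr(K L) measures the source/target distribution gap.
    e = np.vstack([np.full((ns, 1), 1.0 / ns), np.full((nt, 1), -1.0 / nt)])
    L = e @ e.T
    # Centering matrix H.
    H = np.eye(n) - np.ones((n, n)) / n
    # Leading eigenvectors of (K L K + mu I)^{-1} K H K give the components.
    A = np.linalg.solve(K @ L @ K + mu * np.eye(n), K @ H @ K)
    vals, vecs = np.linalg.eig(A)
    W = np.real(vecs[:, np.argsort(-np.real(vals))[:dim]])
    Z = K @ W
    return Z[:ns], Z[ns:]
```

In the leave-one-subject-out setting above, `Xs` would hold the DE (or DE-Spatial) features of the 14 training subjects and `Xt` those of the held-out subject; the transformed features are then passed to the deep learning classifier.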