
The Research Of Emotion Recognition From Multichannel EEG Via Deep Forest

Posted on: 2022-07-04    Degree: Master    Type: Thesis
Country: China    Candidate: M Y Chen    Full Text: PDF
GTID: 2480306557480994    Subject: Biomedical instruments
Abstract/Summary:
With the increasing demand for human-computer interaction (HCI), whether a machine can correctly analyze the user's emotional state has become key to the interactive experience. Automatic emotion recognition is therefore a challenging task that has attracted considerable attention. For many years, research has mainly focused on emotion recognition based on behavioral responses and on physiological signals. Among physiological signals, electroencephalography (EEG) holds an irreplaceable position in emotion recognition because of its high temporal resolution, which supports direct, high-precision recognition.

Recently, deep neural networks (DNNs) have been applied to EEG-based emotion recognition and have achieved better performance than traditional algorithms. DNN-based methods can be divided into feature-decoder-based and data-driven approaches. On the one hand, it is challenging for feature-decoder-based methods to extract sufficiently discriminative features, and they may ignore useful cross-channel or temporal information. On the other hand, although data-driven methods can be trained end-to-end, they still suffer from too many hyperparameters and require large amounts of training data.

To overcome these shortcomings, this paper proposes a multi-channel EEG emotion recognition method based on deep forest, which fully mines the spatio-temporal information of EEG signals and improves recognition accuracy. First, the effect of the baseline signal is considered: the raw, artifact-removed EEG is preprocessed by baseline removal. Second, a 2D frame sequence is constructed by mapping the signals into an equivalent matrix according to the spatial positions of the EEG channels, so that the representation carries both cross-channel spatial information and temporal information. Finally, the 2D frame sequences are fed into a classification model that mines the spatial and temporal information of the EEG signals to classify emotions. Besides requiring fewer model parameters and less training data, the proposed method needs no hand-crafted feature extraction step and is therefore data-driven. Moreover, the classification model is insensitive to hyperparameter settings, which greatly reduces the complexity of emotion recognition.

To verify the feasibility of the proposed model, experiments were conducted on two public databases, DEAP and DREAMER, and the performance was compared with state-of-the-art methods. On the DEAP database, the average accuracies reach 97.69% and 97.53% for valence and arousal, respectively; on the DREAMER database, the average accuracies reach 89.03%, 90.41%, and 89.89% for valence, arousal, and dominance, respectively. The recognition accuracy is improved compared with state-of-the-art methods. This study can promote the development of deep-learning-based EEG emotion recognition technology.
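The abstract does not give implementation details for the baseline-removal and 2D frame-construction steps. The Python sketch below is a minimal, hypothetical illustration only: it assumes a DEAP-style recording (32 EEG channels with a pre-stimulus baseline segment) and an assumed 9x9 grid whose channel-to-cell mapping is invented here for illustration; the thesis may use different grid dimensions, channel layout, and baseline handling.

import numpy as np

# Hypothetical 9x9 grid positions (row, col) for the 32 DEAP channels.
# The channel-to-position mapping is an assumption, not taken from the thesis.
CHANNEL_POS = {
    "Fp1": (0, 3), "Fp2": (0, 5), "AF3": (1, 3), "AF4": (1, 5),
    "F7":  (2, 0), "F3":  (2, 2), "Fz":  (2, 4), "F4":  (2, 6), "F8": (2, 8),
    "FC5": (3, 1), "FC1": (3, 3), "FC2": (3, 5), "FC6": (3, 7),
    "T7":  (4, 0), "C3":  (4, 2), "Cz":  (4, 4), "C4":  (4, 6), "T8": (4, 8),
    "CP5": (5, 1), "CP1": (5, 3), "CP2": (5, 5), "CP6": (5, 7),
    "P7":  (6, 0), "P3":  (6, 2), "Pz":  (6, 4), "P4":  (6, 6), "P8": (6, 8),
    "PO3": (7, 3), "PO4": (7, 5),
    "O1":  (8, 3), "Oz":  (8, 4), "O2":  (8, 5),
}
CHANNELS = list(CHANNEL_POS.keys())


def remove_baseline(trial, baseline):
    """Subtract the per-channel mean of the pre-stimulus baseline segment.

    trial:    (n_channels, n_samples) artifact-removed EEG segment
    baseline: (n_channels, n_baseline_samples) pre-stimulus segment
    """
    return trial - baseline.mean(axis=1, keepdims=True)


def to_frame_sequence(trial, grid_size=9):
    """Map each time sample of a multi-channel trial onto a 2D grid,
    giving a (n_samples, grid_size, grid_size) frame sequence whose
    spatial layout mirrors the electrode positions."""
    n_samples = trial.shape[1]
    frames = np.zeros((n_samples, grid_size, grid_size), dtype=trial.dtype)
    for ch_idx, name in enumerate(CHANNELS):
        row, col = CHANNEL_POS[name]
        frames[:, row, col] = trial[ch_idx]
    return frames


# Example with random data shaped like one DEAP trial (32 channels, 128 Hz, 60 s,
# with a 3 s baseline segment).
rng = np.random.default_rng(0)
trial = rng.standard_normal((32, 128 * 60))
baseline = rng.standard_normal((32, 128 * 3))
frames = to_frame_sequence(remove_baseline(trial, baseline))
print(frames.shape)  # (7680, 9, 9)

The resulting frame sequences would then be passed to the deep-forest classifier described in the abstract. The thesis does not state which implementation it uses; a cascade-forest model such as the open-source deep-forest package's CascadeForestClassifier could serve as a stand-in for experimentation.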
Keywords/Search Tags: human-computer interaction (HCI), emotion recognition, deep neural networks (DNNs), multi-channel EEG, deep forest, spatio-temporal information