
Multimodal Emotion Recognition Based On Fusion Of Speech And EEG Signals

Posted on: 2020-08-09    Degree: Master    Type: Thesis
Country: China    Candidate: J H Ma    Full Text: PDF
GTID: 2370330596485787    Subject: Information and Communication Engineering
Abstract/Summary:
Emotion recognition is one of the key technologies for realizing machine intelligence. By studying and analyzing human emotion, a machine can be made to understand human emotional states and carry out instructions in accordance with human intent. Among the many emotional signals, the speech signal is the most direct and expressive, while the EEG signal is reliable and convenient to acquire; the two complement each other for emotion recognition. In this thesis, a multimodal emotion recognition system is constructed by combining speech and EEG signals. The relationship between these two signals and emotion is analyzed, and effective emotional features that represent emotional differences are extracted from both. Multimodal emotion recognition systems are then built using feature fusion and decision fusion, and their reliability and robustness are verified by comparative experiments. The research contents and innovations of this thesis are as follows:

(1) The structure of a speech emotion recognition system is described in detail. Based on basic characteristics of the speech signal such as speaking rate, pitch, naturalness, and clarity, traditional speech features are extracted, and nonlinear features representing emotional information are analyzed and extracted from two aspects: the attribute characteristics and the geometric structure of the speech signal. The TYUT2.0 corpus is selected as the speech emotion database, and a support vector machine (SVM) is used to classify emotions. Experimental results show that the speech-based emotion recognition system can effectively realize emotion classification.

(2) A new emotional EEG feature is proposed and an effective subset of emotional features is constructed. In view of the nonlinear characteristics of EEG signals, phase space reconstruction is used to extract a new emotional EEG feature, the nonlinear geometric feature, by analyzing the geometric structure of the signal in phase space. This feature is fused with power spectral entropy and nonlinear attribute features to obtain an effective emotional feature set that represents the degree of emotional difference in EEG signals, and SVM is used to classify emotions. The results show that the nonlinear geometric features extracted in this thesis effectively compensate for the limitations of nonlinear attribute features in representing the nonlinear characteristics of EEG signals, and that the feature set constructed together with power spectral entropy better describes the differences between emotions.

(3) A multimodal emotion recognition system is constructed by feature fusion. For the emotional features extracted from the speech and EEG signals, three feature fusion methods (restricted Boltzmann machine, locally linear embedding, and multidimensional scaling) are used to build multimodal emotion recognition systems, removing redundant information between the two feature sets while reducing computational complexity. Comparative experiments against systems using a single emotional signal show that the multimodal systems built by feature fusion achieve better emotion recognition performance.
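A minimal sketch of the feature-level pipeline outlined in (2) and (3), assuming a simple time-delay embedding for the phase space reconstruction, placeholder geometric descriptors (centroid spread and attractor extent), random placeholder data, and scikit-learn's LocallyLinearEmbedding and SVC; the embedding parameters, descriptors, and fusion details are illustrative assumptions, not the thesis's exact method:

```python
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def delay_embed(x, dim=3, tau=4):
    """Time-delay (phase space) reconstruction of a 1-D signal."""
    n = len(x) - (dim - 1) * tau
    return np.stack([x[i * tau: i * tau + n] for i in range(dim)], axis=1)

def eeg_geometric_features(x):
    """Placeholder geometric descriptors of the reconstructed attractor:
    spread around the centroid and per-axis extent (not the thesis's exact features)."""
    pts = delay_embed(x)
    d = np.linalg.norm(pts - pts.mean(axis=0), axis=1)
    return np.concatenate([[d.mean(), d.std()], pts.max(axis=0) - pts.min(axis=0)])

# Placeholder data: single-channel EEG epochs, pre-computed speech features, labels.
rng = np.random.default_rng(0)
n_trials = 60
eeg_epochs = rng.standard_normal((n_trials, 512))
speech_feats = rng.standard_normal((n_trials, 20))
labels = rng.integers(0, 3, size=n_trials)

# Feature-level fusion: concatenate the two modalities, then reduce dimensionality.
eeg_feats = np.array([eeg_geometric_features(tr) for tr in eeg_epochs])
fused = np.hstack([speech_feats, eeg_feats])
fused_low = LocallyLinearEmbedding(n_components=5, n_neighbors=10).fit_transform(fused)

# SVM classification of the fused, reduced features.
print(cross_val_score(SVC(kernel="rbf"), fused_low, labels, cv=5).mean())
```

The reduction step could in principle be swapped for multidimensional scaling or an RBM-based encoder to mirror the other two fusion methods compared in the thesis.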
(4) A quadratic decision fusion algorithm is proposed and used to construct a multimodal emotion recognition system. Since the same types of emotional features are extracted from the speech and EEG signals, this thesis proposes a quadratic (two-stage) decision fusion algorithm. Features of the same type from the two signals (basic features, nonlinear attribute features, and nonlinear geometric features) are combined, and separate classifiers are trained for each type. Dempster-Shafer (DS) evidence theory is then used to fuse the decisions of the nonlinear attribute and nonlinear geometric classifiers into a nonlinear comprehensive result, and the final decision is obtained by voting between the basic-feature result and the nonlinear comprehensive result. Experimental results show that the recognition rate of the multimodal system built with quadratic decision fusion is higher than that of the single-modality emotion recognition systems.
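A minimal sketch of the second-stage fusion described in (4), assuming each first-stage classifier outputs posteriors over three hypothetical emotion classes; Dempster's rule is restricted here to singleton hypotheses, and the tie-breaking rule in the final vote is an assumption rather than the thesis's exact procedure:

```python
import numpy as np

def dempster_combine(m1, m2):
    """Dempster's rule for two mass functions defined only on singleton
    emotion classes (no compound hypotheses)."""
    joint = np.outer(m1, m2)
    agreement = np.trace(joint)          # mass on which both sources agree
    if agreement == 0.0:
        raise ValueError("total conflict: DS combination is undefined")
    return np.diag(joint) / agreement    # renormalise by 1 - conflict

# Hypothetical first-stage posteriors over three emotion classes.
m_attr = np.array([0.6, 0.3, 0.1])       # classifier on nonlinear attribute features
m_geom = np.array([0.5, 0.4, 0.1])       # classifier on nonlinear geometric features
basic = np.array([0.2, 0.5, 0.3])        # classifier on basic features

# Stage 1: DS fusion of the two nonlinear decisions -> nonlinear comprehensive result.
m_nonlinear = dempster_combine(m_attr, m_geom)

# Stage 2: vote between the basic-feature decision and the nonlinear comprehensive
# decision; breaking a disagreement by the larger posterior is an assumption here.
basic_pred = int(np.argmax(basic))
nonlin_pred = int(np.argmax(m_nonlinear))
if basic_pred == nonlin_pred:
    final_pred = basic_pred
else:
    final_pred = basic_pred if basic[basic_pred] > m_nonlinear[nonlin_pred] else nonlin_pred
print(m_nonlinear, final_pred)
```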
Keywords/Search Tags:Multimodal emotion recognition, Speech signal, EEG signal, Feature fusion, Quadratic decision fusion