
Multimodal Emotion Analysis Based On EEG And Facial Expression Images Of Deaf Students

Posted on: 2022-11-29
Degree: Master
Type: Thesis
Country: China
Candidate: Y Yang
Full Text: PDF
GTID: 2480306743972739
Subject: Control Science and Engineering
Abstract/Summary:
Emotion is a complex physiological and psychological state that affects human cognition, behavior, and communication, and emotion recognition plays an important role in fields such as human-computer interaction and mental health monitoring. Emotion recognition methods based on both physiological and non-physiological signals are widely used. Because deaf students lack the auditory perceptual channel, it is difficult for them to express their emotions through language, and their emotional responses often deviate from those of normal-hearing people. This paper therefore proposes a feature fusion network based on EEG signals and facial expressions to identify their emotional states. The main work of this paper is as follows.

(1) Referring to the experimental method of the SEED dataset, a multimodal emotion experiment paradigm for deaf students was designed and implemented using the same emotional stimulus clips. EEG data and facial expression images were collected from 15 deaf student subjects while they watched movie clips of three emotion types (positive, neutral, and negative). According to the subjects' SAM (Self-Assessment Manikin) emotion annotation results, recorded after each movie clip was played, the collected EEG signals were screened to ensure the reliability of the experiment.

(2) A multi-feature fusion network based on a deep belief network was studied: differential entropy EEG features and facial expression images were combined for emotion classification using a feature fusion method. The experimental results show that the multimodal emotion recognition method outperforms single-modal recognition because it provides more comprehensive and complementary emotion representation information.

(3) A network weighting analysis method was proposed to explore the main features of the EEG signals and facial expression images of deaf students. Twelve major EEG channels and 30 facial key-point features were selected, identifying the facial regions that are important for understanding micro-expression changes. The experimental analysis found that EEG emotional changes were concentrated mainly in the high-frequency bands (Gamma and Beta) and in the temporal and frontal regions, and that micro changes in facial expression were closely related to the corners of the eyes, the eyebrows, and the mouth.

(4) A visual interface for emotion labeling by deaf students was designed and implemented to improve the reliability of emotion labeling. Using the SAM system as the labeling benchmark, a framework for joystick-operated continuous labeling of the valence dimension of stimulus clips was designed. The continuous emotion labels of deaf student subjects and normal-hearing subjects were compared to analyze the bias that hearing loss imposes on deaf students' understanding of emotion. The experimental results show that deaf students respond strongly to positive emotions but produce weaker negative emotional responses than normal-hearing subjects; frontal-region energy increases with negative emotion, while increasing positive emotion causes temporal-region energy to rise.
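The differential entropy (DE) feature used in (2) has a closed form when a band-filtered EEG segment is modeled as Gaussian: DE = 0.5 ln(2πeσ²). The sketch below illustrates that computation per channel and per band; the band boundaries, filter order, and use of SciPy's butter/filtfilt are illustrative assumptions, not details taken from the thesis.

import numpy as np
from scipy.signal import butter, filtfilt

# Common EEG band boundaries in Hz (an assumption; the thesis's exact bands are not given).
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 14),
         "beta": (14, 31), "gamma": (31, 50)}

def differential_entropy(x):
    """DE of a 1-D signal under a Gaussian assumption: 0.5 * ln(2*pi*e*var)."""
    return 0.5 * np.log(2 * np.pi * np.e * np.var(x))

def de_features(eeg, fs):
    """eeg: (n_channels, n_samples) array -> (n_channels, n_bands) DE matrix."""
    feats = np.empty((eeg.shape[0], len(BANDS)))
    for j, (lo, hi) in enumerate(BANDS.values()):
        b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        filtered = filtfilt(b, a, eeg, axis=1)  # zero-phase band-pass per channel
        feats[:, j] = [differential_entropy(ch) for ch in filtered]
    return feats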
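The thesis's fusion network is built on a deep belief network; as a rough, library-level stand-in, the sketch below concatenates EEG DE features with facial-expression features (feature-level fusion) and trains scikit-learn's BernoulliRBM followed by a logistic-regression classifier. The single RBM layer, all dimensions, and the synthetic data are assumptions for illustration only, not the thesis's architecture.

import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler

rng = np.random.default_rng(0)
eeg_feats = rng.normal(size=(450, 62 * 5))   # hypothetical: 62 channels x 5 DE bands
face_feats = rng.normal(size=(450, 60))      # hypothetical: 30 facial key points x (x, y)
labels = rng.integers(0, 3, size=450)        # positive / neutral / negative

# Feature-level fusion: simple concatenation of the two modalities.
fused = np.hstack([eeg_feats, face_feats])

model = Pipeline([
    ("scale", MinMaxScaler()),               # RBMs expect inputs in [0, 1]
    ("rbm", BernoulliRBM(n_components=128, learning_rate=0.05, n_iter=20)),
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(fused, labels)
print(model.score(fused, labels))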
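The network weighting analysis in (3) ranks input features by the weights a trained network assigns them. A minimal version of that idea, assuming the first-layer weight matrix is available and the EEG features are grouped five-per-channel as in the sketch above:

import numpy as np

def channel_importance(first_layer_w, n_channels=62, n_bands=5):
    """Score each EEG channel by the L2 norm of its features' outgoing weights.

    first_layer_w: (n_features, n_hidden) weight matrix whose first
    n_channels * n_bands rows correspond to the EEG DE features.
    """
    eeg_w = first_layer_w[: n_channels * n_bands]
    per_feature = np.linalg.norm(eeg_w, axis=1)
    per_channel = per_feature.reshape(n_channels, n_bands).sum(axis=1)
    return np.argsort(per_channel)[::-1]      # channel indices, most important first

# e.g. the 12 highest-weighted channels from the fitted pipeline above:
# top12 = channel_importance(model.named_steps["rbm"].components_.T)[:12]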
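The continuous valence labeling framework in (4) can be pictured as sampling one joystick axis at a fixed rate while a stimulus clip plays. The loop below is a minimal sketch using pygame; the axis index, 10 Hz rate, and clip duration are all assumptions, since the thesis does not specify the interface's implementation.

import time
import pygame

pygame.init()
pygame.joystick.init()
stick = pygame.joystick.Joystick(0)          # first connected joystick
stick.init()

labels = []
t0 = time.time()
while time.time() - t0 < 10.0:               # label for the clip's duration (assumed 10 s)
    pygame.event.pump()                      # refresh joystick state
    valence = stick.get_axis(0)              # -1 (negative) .. +1 (positive)
    labels.append((time.time() - t0, valence))
    time.sleep(0.1)                          # 10 Hz annotation rate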
Keywords/Search Tags:EEG, Facial expression, Emotion recognition, Feature fusion, Deep learning