Research On Driver’s Individual Emotion Recognition Under Multimodality

Posted on: 2022-12-21    Degree: Master    Type: Thesis
Country: China    Candidate: S Z Wang    Full Text: PDF
GTID: 2492306758450804    Subject: Master of Engineering (Field of Vehicle Engineering)
Abstract/Summary:
Emotion influences driving behavior and safety, so emotion recognition has gradually become a popular direction in driver-state research. However, because of the complexity of the driving environment and differences among individual drivers, driver emotion recognition still suffers from poor robustness and low accuracy. Considering that passenger-car drivers are relatively fixed, that recognition techniques on public data are relatively mature, and that individual drivers' data are easy to collect but difficult to label, this paper first used public emotion data to build a general multimodal emotion recognition model. A personalized driver emotion recognition model based on learning from unlabeled data was then established by further collecting individual drivers' emotion data, and the effectiveness of the proposed method was verified on the emotion data of three drivers. Finally, a personalized driver emotion recognition system was constructed on the basis of the above research. The specific research contents are as follows:

1) The characteristics of EEG and ECG physiological signals and the processing methods for extracting emotion-related features were analyzed, and emotion samples of EEG, ECG and facial expression were drawn from the MAHNOB-HCI public emotion dataset. Hand-crafted features of the EEG and ECG signals and deep features of the facial expressions were then extracted from each sample group to construct a general multimodal emotion recognition model (a feature-extraction sketch follows this summary).

2) EEG, ECG and facial-expression data of three drivers under positive, neutral and negative emotional states were collected through simulated-driving experiments with emotion induction. For each of the three emotional states, 9,000 groups of multimodal emotion samples comprising EEG, ECG and facial expression were obtained, and the reliability of the data was verified by cluster analysis (see the second sketch below).

3) On the basis of the general recognition model, a cross-domain recognition model for individual drivers was constructed to predict pseudo-labels for individual drivers' emotion data, and a weighted clustering method was used to annotate the same unlabeled data. The pseudo-label and clustering-label results were then jointly integrated to screen the individual emotion samples carrying high-confidence labels, which were used to train the personalized recognition model; the model was continuously optimized by repeating this process (the loop is sketched in the third example below). The personalized emotion recognition models of the three drivers were compared at each optimization cycle.

4) Terminal physiological-signal acquisition equipment and a facial-expression camera are used to collect and display the driver's emotion data in real time, and the emotion recognition model deployed on the terminal outputs the driver's current emotional state in real time. As a next step, joint cloud deployment will enable continuous optimization of the personalized driver recognition system.
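A minimal Python sketch of the hand-crafted physiological features described in 1); the frequency bands, sampling rate, Welch/HRV feature choices and the fuse_features helper are illustrative assumptions, not the thesis's exact feature set:

import numpy as np
from scipy.signal import welch, find_peaks

EEG_BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}  # assumed bands

def eeg_band_powers(eeg, fs=256):
    """Average power per EEG frequency band for one channel (hand-crafted feature)."""
    freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)
    return np.array([psd[(freqs >= lo) & (freqs < hi)].mean()
                     for lo, hi in EEG_BANDS.values()])

def ecg_hrv_features(ecg, fs=256):
    """Simple HRV statistics from R-peak intervals (hand-crafted ECG features)."""
    peaks, _ = find_peaks(ecg, distance=int(0.4 * fs))  # >= 0.4 s between beats
    rr = np.diff(peaks) / fs                            # R-R intervals in seconds
    return np.array([rr.mean(), rr.std(),
                     np.sqrt(np.mean(np.diff(rr) ** 2))])  # meanRR, SDNN, RMSSD

def fuse_features(eeg, ecg, face_embedding):
    """Feature-level fusion: concatenate physiological and facial deep features."""
    return np.concatenate([eeg_band_powers(eeg), ecg_hrv_features(ecg), face_embedding])

Here face_embedding stands in for the deep facial-expression features, which in the thesis come from a separate network.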
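A hedged sketch of the cluster-analysis reliability check in 2), assuming scikit-learn; using k-means with k = 3 plus silhouette and adjusted-Rand scores as validity indices is an assumption, since the abstract does not specify the exact clustering procedure:

import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score, adjusted_rand_score

def check_data_reliability(features, induced_labels, n_emotions=3):
    """If the samples collected under the three induced emotions form three
    coherent clusters that agree with the induction labels, the data are
    taken as reliable."""
    km = KMeans(n_clusters=n_emotions, n_init=10, random_state=0).fit(features)
    sil = silhouette_score(features, km.labels_)            # cluster separation
    ari = adjusted_rand_score(induced_labels, km.labels_)   # agreement with induction
    return sil, ari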
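A minimal sketch of the iterative joint-annotation loop in 3), assuming scikit-learn-style estimators; general_model, make_model, the confidence threshold and the confidence-weighted clustering scheme are placeholders for the thesis's actual components:

import numpy as np
from sklearn.cluster import KMeans

def personalize(general_model, make_model, X_unlabeled, n_classes=3,
                conf_thresh=0.9, n_rounds=5):
    """Iteratively screen high-confidence samples and retrain (self-training)."""
    model = general_model
    for _ in range(n_rounds):
        proba = model.predict_proba(X_unlabeled)   # pseudo-labels + confidence
        pseudo = proba.argmax(axis=1)
        conf = proba.max(axis=1)
        # Weighted clustering: weight samples by model confidence when clustering.
        km = KMeans(n_clusters=n_classes, n_init=10, random_state=0)
        km.fit(X_unlabeled, sample_weight=conf)
        # Map each cluster to the pseudo-label most frequent inside it.
        cluster_label = {c: np.bincount(pseudo[km.labels_ == c],
                                        minlength=n_classes).argmax()
                         for c in range(n_classes)}
        cluster_vote = np.array([cluster_label[c] for c in km.labels_])
        # Joint annotation: keep samples where both labelings agree and the
        # model is confident; retrain the personalized model on them.
        keep = (pseudo == cluster_vote) & (conf >= conf_thresh)
        model = make_model().fit(X_unlabeled[keep], pseudo[keep])
    return model

In this sketch a sample is retained only when the pseudo-label and the confidence-weighted cluster vote agree, which mirrors the high-confidence screening step described above.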
Keywords/Search Tags: driver emotion recognition, personalized model, multimodal fusion, domain adaptation, weighted clustering, joint annotation