Emotions have a significant impact on human physical and mental health, so emotion recognition has long been an active research direction in neuroscience, psychology, and medicine. In recent years, with the rapid development of the Internet of Things, human-computer interaction, and wireless communication technologies, the demand for emotion perception in industry and the research community has grown steadily. Although wearable-device emotion recognition technology has matured, its use is limited by strong intrusiveness, expensive equipment, high maintenance costs, and high system complexity, so non-contact sensing approaches have received increasing attention. Millimeter-wave radar offers high sensitivity to millimeter-level displacement and strong noise immunity, and a large body of work has shown that it performs well in vital-sign monitoring, which suggests a path toward emotion recognition based on millimeter-wave radar. Besides physiological signs, facial expressions are also an important reference for judging emotion, so using multi-dimensional data to improve recognition accuracy is another direction explored in this thesis. The main research contents are as follows:

(1) To address the limitations of traditional emotion recognition based on physiological information, a non-contact emotion recognition method, ER-MIK, is proposed. First, the mixed respiration and heartbeat signals of subjects under different emotions are monitored by millimeter-wave radar. The signals are then preprocessed with moving target detection (MTD) and a Butterworth filter, and t-distributed stochastic neighbor embedding (t-SNE) is used for dimensionality reduction and feature extraction. Finally, a k-nearest neighbors (KNN) model is constructed and trained on the feature dataset. On the test set, the model achieves an average recognition accuracy of 79.25% under subject-dependent conditions and 68.5% under subject-independent conditions.

(2) To address the limitations of traditional machine learning methods as classification models, this thesis improves on the above method and proposes an emotion recognition method based on deep learning. The collected raw data undergo pre-whitening, phase unwrapping, phase differencing, and related operations to separate the signal into independent respiration and heartbeat components. The resulting signals are fed into a proposed deep learning model combining a one-dimensional convolutional neural network (1D-CNN) with a bidirectional long short-term memory network (Bi-LSTM) to obtain the classification results. On verification, the model achieves an average recognition accuracy of 82.75% under subject-dependent conditions and 71.62% under subject-independent conditions.

(3) Emotion recognition based on facial expressions is already well established. As an intuitive channel of emotional expression, the face is one of the important references for emotional judgment. Building on the above work, this chapter therefore combines facial expression with physiological sign information for emotion recognition. Specifically, the radar signal is first processed with moving target indication (MTI) and variational mode decomposition (VMD) algorithms to separate respiration and heartbeat signals, while the captured facial video is processed by face detection, image cropping, and key-frame selection. Based on these respective characteristics, a deep learning model stacking a convolutional neural network (CNN) and a gated recurrent unit (GRU) is designed for emotion recognition and classification. Subsequent experiments verify that the average recognition accuracy over the four emotions reaches 85.5% for the subject-dependent classifier and 74.25% for the subject-independent classifier.
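The ER-MIK pipeline of contribution (1) — Butterworth filtering, t-SNE dimensionality reduction, then KNN classification — can be sketched on synthetic data. This is only an illustrative sketch, not the thesis implementation: the sampling rate, filter band, class definitions, and all signal parameters are assumed, and note that t-SNE has no out-of-sample transform, so the embedding is computed on all samples before the train/test split.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt
from sklearn.manifold import TSNE
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
fs = 20.0  # assumed radar slow-time sampling rate (Hz)


def synth_vital(breath_hz, heart_hz, n=300):
    """Synthetic chest displacement: respiration + weaker heartbeat + noise."""
    t = np.arange(n) / fs
    return (np.sin(2 * np.pi * breath_hz * t)
            + 0.2 * np.sin(2 * np.pi * heart_hz * t)
            + 0.05 * rng.standard_normal(n))


# Two hypothetical "emotion" classes with different breathing/heart rates.
X = np.array([synth_vital(0.25 + 0.01 * rng.standard_normal(), 1.0)
              for _ in range(40)]
             + [synth_vital(0.45 + 0.01 * rng.standard_normal(), 1.5)
                for _ in range(40)])
y = np.array([0] * 40 + [1] * 40)

# Butterworth band-pass keeps the 0.1-3 Hz vital-sign band.
sos = butter(4, [0.1, 3.0], btype="band", fs=fs, output="sos")
Xf = sosfiltfilt(sos, X, axis=1)

# t-SNE embedding of all filtered samples down to 2-D features.
emb = TSNE(n_components=2, perplexity=15, random_state=0).fit_transform(Xf)

# KNN classifier trained on the embedded features.
Xtr, Xte, ytr, yte = train_test_split(emb, y, test_size=0.25,
                                      random_state=0, stratify=y)
knn = KNeighborsClassifier(n_neighbors=5).fit(Xtr, ytr)
acc = knn.score(Xte, yte)
print(f"test accuracy: {acc:.2f}")
```

On such cleanly separated synthetic classes the pipeline classifies almost perfectly; the thesis accuracies (79.25% / 68.5%) reflect the much harder real-radar setting.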
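The preprocessing chain of contribution (2) — phase unwrapping, phase differencing, then separation into respiration and heartbeat components — can likewise be sketched with numpy and scipy. The sampling rate, motion amplitudes, and band edges below are assumed for illustration; the thesis pipeline additionally includes pre-whitening, which is omitted here.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 20.0  # assumed slow-time sampling rate (Hz)
t = np.arange(0, 30, 1 / fs)

# Simulated radar phase: chest motion from respiration (0.3 Hz) and a much
# smaller heartbeat ripple (1.2 Hz), wrapped into (-pi, pi] as raw phase is.
chest = 4.0 * np.sin(2 * np.pi * 0.3 * t) + 0.1 * np.sin(2 * np.pi * 1.2 * t)
wrapped = np.angle(np.exp(1j * chest))

# Step 1: phase unwrapping restores the continuous displacement waveform.
phase = np.unwrap(wrapped)

# Step 2: phase differencing suppresses slow drift and the DC offset.
dphase = np.diff(phase)


# Step 3: band-pass filters split the signal into the two vital-sign bands.
def bandpass(x, lo, hi):
    sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
    return sosfiltfilt(sos, x)


resp = bandpass(dphase, 0.1, 0.5)   # typical respiration band
heart = bandpass(dphase, 0.8, 2.0)  # typical heartbeat band


def peak_freq(x):
    """Dominant frequency of a separated component via the FFT magnitude."""
    spec = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    return freqs[np.argmax(spec)]


print(peak_freq(resp), peak_freq(heart))  # near 0.3 Hz and 1.2 Hz
```

The recovered dominant frequencies match the simulated respiration and heartbeat rates, which is exactly the separation the 1D-CNN + Bi-LSTM model consumes as input.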
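The moving target indication (MTI) step used in contribution (3) amounts to differencing along slow time so that static clutter cancels while moving reflectors survive. The sketch below is a noise-free toy example with assumed matrix sizes and target position; the VMD stage that follows MTI in the thesis is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)
n_frames, n_bins = 200, 64  # assumed slow-time frames x range bins
fs = 20.0                   # assumed frame rate (Hz)
t = np.arange(n_frames) / fs

# Static clutter: strong, constant reflections (walls, furniture) that are
# identical in every frame.
clutter = np.tile(rng.uniform(0.5, 2.0, n_bins), (n_frames, 1))

# Moving target: small chest displacement modulating one range bin.
frames = clutter.copy()
frames[:, 30] += 0.3 * np.sin(2 * np.pi * 0.3 * t)

# First-order MTI: subtract consecutive frames along slow time. The static
# returns cancel exactly; only the moving component remains.
mti = np.diff(frames, axis=0)

static_energy = np.sum(mti[:, :30] ** 2) + np.sum(mti[:, 31:] ** 2)
target_energy = np.sum(mti[:, 30] ** 2)
print(static_energy, target_energy)
```

With real noisy radar data the static residue is small rather than exactly zero, but the clutter suppression principle is the same.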