Multi-modal Information Synergy Analysis Based on Anxiety and Depression State Identification
| Posted on: 2024-04-14 | Degree: Master | Type: Thesis | Country: China | Candidate: Y T Huang | GTID: 2544307082466964 | Subject: Applied Psychology |

Abstract:

BACKGROUND: Anxiety and depression are among the most common psychiatric disorders, and their detection rates in China have risen in recent years. Many patients cannot receive timely diagnosis and treatment because of geographic remoteness, economic constraints, or physician shortages, so early screening for anxiety and depressive states is of great clinical importance. Mental-state identification has attracted extensive attention from the artificial-intelligence and psychology communities, and mining emotion from multi-modal data has become a popular research topic in natural language processing, data mining, and related fields. Developing surveillance for anxiety and depressive disorders in the primary care system is significant for exploring early, rapid diagnosis and comprehensive intervention models for these disorders.

METHODS: Scale scores and diagnostic results, video recordings of patients with different degrees of anxiety and depression, and data from healthy controls were collected in a general hospital outpatient clinic using the Hamilton Anxiety Rating Scale and the Hamilton Depression Rating Scale. Single video frames were fed into a deep neural network to extract faces and estimate the emotional arousal and valence of participants' facial expressions; 723 anxiety videos and 741 depression videos were included. Transcribed text from the videos was analyzed with SnowNLP to obtain the sentiment polarity of each participant response, and word frequencies were counted after jieba word segmentation; 16,102 anxiety texts and 9,662 depression texts were included. Differences between mild-to-moderate anxiety and severe anxiety, and between mild-to-moderate depression, severe depression, and healthy controls, were compared respectively. The multi-modal features comprised the arousal and valence of facial expressions together with a collaborative analysis of sentiment values in the texts, and individual depressive status was predicted with a Random Forest machine learning model.

RESULTS: In the expression data, the arousal and valence of facial expressions differed significantly between patients with different severities of anxiety disorder and healthy controls (P < 0.05). Between depression severity levels, expression arousal did not differ significantly, whereas expression valence did (P < 0.05). Visualizing the statistical differences in the texts and their word frequencies revealed the high-frequency words that distinguished the anxiety and depression patient groups during conversation with the physician. Predicting participants' mental status with the Random Forest achieved good results, with mean accuracy ACC = 69.23% and recall REC = 75.95%.

CONCLUSION: By analyzing uni-modal data from participants with different levels of anxiety and depression and from healthy controls, this study found significant group differences in both facial-expression images and video-transcribed text. It explores the feasibility of a rapid, contactless, real-time method for identifying anxiety and depression from participants' mental status. We hope that collaborative comparative analysis of emotional information across texts and images will provide external behavioral evidence for the early diagnosis of different levels of anxiety and depression, and suggest new, more detailed approaches to subsequent intervention and treatment.
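The text pipeline described above computes a per-response sentiment polarity and group word frequencies. The following is a minimal stdlib-only sketch of that idea: the thesis uses SnowNLP (which returns a probability in [0, 1] that a Chinese sentence is positive) and jieba segmentation, for which a tiny hypothetical sentiment lexicon and whitespace tokenization are stand-ins here.

```python
from collections import Counter

# Hypothetical stand-in lexicon; the actual pipeline uses SnowNLP's
# trained sentiment model rather than word lists.
POSITIVE = {"good", "happy", "calm"}
NEGATIVE = {"sad", "tired", "anxious", "hopeless"}

def polarity(tokens):
    """Score in [0, 1]: share of sentiment-bearing tokens that are positive.

    Mimics SnowNLP's 0-1 polarity scale; returns 0.5 (neutral) when no
    lexicon word appears.
    """
    pos = sum(t in POSITIVE for t in tokens)
    neg = sum(t in NEGATIVE for t in tokens)
    if pos + neg == 0:
        return 0.5
    return pos / (pos + neg)

def word_frequencies(responses):
    """Count token frequencies across all responses.

    The thesis segments Chinese text with jieba; whitespace splitting is a
    stand-in for English example strings.
    """
    counts = Counter()
    for r in responses:
        counts.update(r.split())
    return counts

responses = ["i feel tired and anxious", "today was a good day"]
scores = [polarity(r.split()) for r in responses]   # one polarity per response
freqs = word_frequencies(responses)                 # corpus word frequencies
```

Per-group frequency tables built this way are what the visualized high-frequency word comparisons between patient groups would be computed from.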
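The prediction step pairs the per-participant expression features (arousal, valence) and text sentiment values with a Random Forest classifier, evaluated by accuracy and recall. A minimal scikit-learn sketch on synthetic stand-in features (the feature layout, group means, and sample sizes below are illustrative assumptions, not the thesis data):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-in features per participant:
# [mean expression arousal, mean expression valence, mean text polarity].
# Labels: 0 = healthy control, 1 = depressive state (illustrative only).
n = 200
X_healthy = rng.normal(loc=[0.50, 0.60, 0.60], scale=0.10, size=(n, 3))
X_patient = rng.normal(loc=[0.40, 0.40, 0.35], scale=0.10, size=(n, 3))
X = np.vstack([X_healthy, X_patient])
y = np.array([0] * n + [1] * n)

clf = RandomForestClassifier(n_estimators=100, random_state=0)

# Mean cross-validated accuracy (ACC) and recall (REC), the two metrics
# reported in the abstract.
acc = cross_val_score(clf, X, y, cv=5, scoring="accuracy").mean()
rec = cross_val_score(clf, X, y, cv=5, scoring="recall").mean()
```

On the well-separated synthetic clusters both metrics come out high; the abstract's ACC = 69.23% and REC = 75.95% reflect the much harder real clinical data.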
| Keywords/Search Tags: anxiety, depression, expression, transcribed text |