Depression is a serious mental illness that has become a worldwide health problem. The symptoms of mild depression are not obvious and often go unnoticed, making it difficult for patients to seek medical attention in a timely manner. If mild depression can be detected as early as possible and intervention taken to prevent it from developing into moderate or major depression, the pain and burden on patients and their families can be reduced. Therefore, the detection of mild depression is very important for the early diagnosis of depression. However, the diagnosis of depression relies mainly on the doctor's consultation and the patient's questionnaire report, which is highly subjective and prone to misdiagnosis. In recent years, many researchers have explored the use of objective physiological signals to identify and detect depression. Electroencephalography (EEG) is non-invasive and has high temporal resolution, while eye movement (EM) is a relatively direct external reflection of inner mental activity, so these two physiological signals are often used in depression research. In current research, multimodal data fusion has become a hot topic: a more accurate and robust depression recognition model can be constructed by integrating complementary information from different modalities. In this paper, data from 20 subjects with mild depression and 20 normal controls were collected synchronously under an experimental paradigm of freely browsing emotional faces, in order to establish an effective and objectively quantified recognition model of mild depression for auxiliary diagnosis. The main work and innovations are as follows:

(1) Different modalities carry both shared and complementary information, and traditional linear fusion loses part of this information. To address this, this paper proposes mild depression recognition based on graph fusion. Graph fusion uses a nonlinear approach based on message-passing theory that is capable of capturing shared
and complementary information from different data sources. In this paper, the different frequency bands of the EEG are treated as separate modalities, and a graph is constructed for each modality from the features of each EEG band and the EM features. Nonlinear graph fusion is then used to perform multimodal fusion: weak similarities disappear during fusion, helping to reduce noise, while strong similarities present in one or more modality graphs are propagated to the other modality graphs. This nonlinearity makes full use of the local structure of the graphs to integrate shared and complementary information. Moreover, because graph fusion operates on a sample network, it can extract useful information even from a small number of samples and is robust to noise and data heterogeneity. The experimental results show that graph fusion improves the recognition accuracy of mild depression, reaching the highest classification accuracies of 89.50% and 85.75% on the Neu_Block and Emo_Block data, respectively, an increase of 5.23% and 4.95% over the best single-modality accuracy.

(2) To address the limited information coverage and unstable classification results of a single physiological signal, this paper proposes a content-based multiple evidence fusion (CBMEF) method within the framework of Dempster-Shafer (DS) evidence theory, which fuses EEG signals and EM data at the decision level without changing the original modal feature structure. The advantage of decision-level fusion is that information from multiple modalities can be integrated while avoiding mutual interference. The proposed CBMEF method comprises two modules: a classification performance matrix module and a dual-weight fusion module. The classification performance matrices of the different modalities are estimated by Bayesian rule from the confusion matrix and the Mahalanobis distance, and these matrices are used to correct the classification results. Then, the relative conflict degree of each modality
is calculated based on an exponential function and the weighted Jousselme distance, and the modalities are assigned different weights in the decision-fusion stage according to this conflict degree. The experimental results show that the proposed method achieves the highest accuracies of 91.12% and 86.72% on the Neu_Block and Emo_Block data, respectively, which verifies the potential of CBMEF for detecting mild depression and casts new light on applied multimodal mild depression recognition based on EEG and EM.

In conclusion, the multimodal fusion methods based on EEG and EM proposed in this paper demonstrate good performance in recognizing mild depression and provide a reference for applied research on fusing multiple physiological signals to identify mild depression.
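To make the nonlinear graph-fusion idea described above concrete, the sketch below cross-diffuses per-modality sample similarity graphs in the spirit of similarity network fusion: each modality's graph is repeatedly smoothed through the average of the others, so weak (noisy) edges fade while edges supported by one or more modalities are reinforced. This is a minimal illustration under assumed conventions (RBF similarity kernels, k-nearest-neighbour local graphs, a fixed iteration count); all names and parameters are illustrative, not the exact implementation used in this work.

```python
import numpy as np

def row_normalize(W):
    """Turn a similarity matrix into a row-stochastic transition matrix."""
    return W / W.sum(axis=1, keepdims=True)

def knn_mask(W, k):
    """Keep only each sample's k strongest neighbours (local structure)."""
    S = np.zeros_like(W)
    idx = np.argsort(-W, axis=1)[:, :k]
    for i, nbrs in enumerate(idx):
        S[i, nbrs] = W[i, nbrs]
    return row_normalize(S)

def graph_fusion(graphs, k=5, iterations=10):
    """SNF-style nonlinear fusion of per-modality similarity graphs.

    Each graph is diffused through the average of the other modalities'
    graphs via the sparse kNN structure, so weak similarities vanish and
    strong cross-modality similarities are propagated.
    """
    P = [row_normalize(W) for W in graphs]   # dense global graphs
    S = [knn_mask(W, k) for W in graphs]     # sparse local graphs
    for _ in range(iterations):
        P_new = []
        for v in range(len(P)):
            others = [P[u] for u in range(len(P)) if u != v]
            avg = sum(others) / len(others)
            P_new.append(S[v] @ avg @ S[v].T)
        P = [(Q + Q.T) / 2 for Q in P_new]   # keep each graph symmetric
    return sum(P) / len(P)                   # fused sample network

# Toy usage: fuse an EEG-band graph and an EM graph for 6 subjects.
rng = np.random.default_rng(0)
X_eeg, X_em = rng.random((6, 4)), rng.random((6, 3))
W_eeg = np.exp(-np.square(X_eeg[:, None] - X_eeg[None]).sum(-1))
W_em = np.exp(-np.square(X_em[:, None] - X_em[None]).sum(-1))
fused = graph_fusion([W_eeg, W_em], k=3)
```

The fused matrix can then be fed to any graph-based or similarity-based classifier; the key design point mirrored from the text is that fusion happens on the sample network rather than on concatenated feature vectors.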