Medical images reflect the internal structure of the human body and are a common auxiliary tool in clinical diagnosis and treatment. In recent years, medical imaging technology has been widely used in clinical diagnosis, treatment, and post-treatment review and evaluation. However, this diagnostic approach relies heavily on the subjective judgment and clinical experience of doctors, who often need to observe and assess images for a long time during diagnosis, which greatly reduces the efficiency of diagnosis and treatment. In the field of medical imaging, two or more images can be fused into a single image with complementary information, which helps doctors locate the disease type, lesions, and condition. Most existing methods focus on two-modality fusion, yet diseases that require three-modality fusion are very common. Three-modality fusion can obtain more information about the same location, but related research has received little attention, and how to select and design a suitable algorithm for multi-modality fusion remains a major challenge. The features of medical images are complex, and image features are expressed differently in different modalities; how to extract medical image features effectively is the key factor affecting classification performance. At the same time, medical image datasets suffer from class imbalance, leading to poor recognition of minority classes. Among medical image data, brain images are also particularly difficult for doctors to diagnose. Therefore, this thesis studies multi-modality fusion of brain images and the computer-aided diagnosis and classification of brain diseases:

(1) This thesis proposes a high-precision multi-modal medical image fusion network that can be used for both three-modality and two-modality fusion of brain medical images. According to the different information characteristics of anatomical and functional medical images, this thesis designs a Global Texture Module (GTM) and a Local Detail Module (LDM) for feature extraction. A fusion strategy based on the Fourier transform combines the advantages of the spatial and frequency domains to retain more complete texture details and global contour information. In addition, this thesis proposes a multi-attention mechanism to extract more effective deep features and more accurate location information. Experimental results show that the method performs well in both subjective visual assessment and objective metrics, and can greatly improve the efficiency of locating lesions during diagnosis and treatment.

(2) To address the insufficient accuracy of computer-aided diagnosis and classification of brain images, this thesis designs the Small Weight Architecture Net (SWA-Net), which can accurately and efficiently perform auxiliary classification and diagnosis of brain diseases and assist doctors in judging patients' conditions. In this network, a convolutional attention module is introduced to extract feature information from brain images, and a new residual structure is designed to fuse information across feature channels and strengthen their correlations, so that channel features in the image are extracted effectively and higher accuracy is obtained. At the same time, the dataset is augmented to avoid overfitting during model training. Data augmentation was applied to the Alzheimer's Disease (AD) dataset, and the image classification accuracy of the model was verified. The experimental results show that the classification method in this thesis can effectively determine whether a patient suffers from the related brain diseases.
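The abstract does not give the exact fusion rule used in the Fourier-transform-based strategy; the following is a minimal, hypothetical sketch of how a frequency-domain fusion step could combine an anatomical and a functional image. The function name, the max-amplitude rule, and the phase averaging are assumptions for illustration, not the thesis's actual strategy.

```python
# Illustrative sketch only: names and the fusion rule below are assumptions,
# intended to show the general idea of mixing spatial- and frequency-domain
# information with an FFT, not the method proposed in this thesis.
import torch
import torch.fft


def frequency_domain_fuse(anatomical: torch.Tensor, functional: torch.Tensor) -> torch.Tensor:
    """Fuse two single-channel images (H, W) by mixing their Fourier spectra.

    Hypothetical rule: keep the larger amplitude at each frequency (preserving
    strong texture/contour components) and average the phases.
    """
    fa = torch.fft.fft2(anatomical)
    ff = torch.fft.fft2(functional)

    amp = torch.maximum(fa.abs(), ff.abs())      # max-amplitude rule
    phase = (fa.angle() + ff.angle()) / 2.0      # averaged phase

    fused_spectrum = amp * torch.exp(1j * phase)
    fused = torch.fft.ifft2(fused_spectrum).real  # back to the spatial domain
    return fused


if __name__ == "__main__":
    a = torch.rand(256, 256)   # stand-in for an anatomical (e.g. MRI) slice
    b = torch.rand(256, 256)   # stand-in for a functional (e.g. PET) slice
    print(frequency_domain_fuse(a, b).shape)      # torch.Size([256, 256])
```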
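Likewise, the specific augmentation operations applied to the AD dataset are not listed in the abstract; the snippet below is a sketch of a common torchvision augmentation pipeline for 2D brain image slices, assumed only for illustration of how augmentation can reduce overfitting during training.

```python
# Illustrative sketch only: these transforms are common choices for brain MRI
# slices and are not the thesis's actual augmentation configuration.
from torchvision import transforms

train_transform = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),                       # brains are roughly symmetric
    transforms.RandomRotation(degrees=10),                        # small rotations
    transforms.RandomAffine(degrees=0, translate=(0.05, 0.05)),   # slight translations
    transforms.ToTensor(),
])
```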