
Study On Multi-modal Brain Image Fusion Algorithm Based On Deep Learning

Posted on: 2024-08-27 | Degree: Master | Type: Thesis
Country: China | Candidate: H Lin | Full Text: PDF
GTID: 2544307097469324 | Subject: Pattern Recognition and Intelligent Systems
Abstract/Summary:
Medical imaging technology provides a detailed view of human tissue structure. Because the different imaging modalities are widely used in disease diagnosis, they have greatly improved clinical decision-making. Magnetic resonance imaging (MRI) provides high-resolution soft-tissue information, and computed tomography (CT) provides high-quality bone-density information. Positron emission tomography (PET) and single-photon emission computed tomography (SPECT) can reveal blood flow, metabolism, and even the activity of certain cancer cells. In complex diagnostic scenarios, experts often need to examine structural and tissue information at different depths of the human body. Constructing a high-quality multi-modal fused image that carries the complementary information and rich details of the source images gives experts a richer reference, making clinical diagnosis more accurate and the diagnostic process more convenient. The purpose of this thesis is to study deep-learning-based multi-modal brain image fusion: by designing efficient, high-quality self-supervised fusion models, the complementary information of different modalities can be integrated into clear fused images, overcoming the poor adaptability, blurred or missing details, and cumbersome architectures common in most previous fusion methods. The research contents are as follows:

1. To address the low clarity, the poor fidelity in preserving and restoring local information, and the information loss of previous deep-learning fusion methods, an end-to-end Patch-GAN-based fusion model (U-Patch GAN) is designed for brain images of various modalities. The model combines U-Net and Patch-GAN in a dual adversarial fusion mechanism; Patch-GAN steers the network's attention toward high-frequency information and enhances the fused details. The newly designed F-norm-based adversarial loss and feature losses (a feature-matching loss and a VGG-16 perceptual loss) promote network convergence, strengthen the interaction of feature information, and improve the integration of details. Spectral normalization makes the network satisfy Lipschitz continuity, which stabilizes training. Experiments show that, for every modality pair, the fusion results preserve soft-tissue texture and functional chrominance information well and have good visual quality.

2. To better preserve MRI dense-bone information in PET-MRI fusion with the U-Patch GAN model, and to resolve the instability of its dual adversarial mechanism and the complexity of that model, this thesis proposes an attention-based MRI-PET fusion model (Res-Attention Net). By introducing the Convolutional Block Attention Module (CBAM), the Attention Gate (AG), the multi-scale Atrous Spatial Pyramid Pooling (ASPP) module, and residual structures, the model dynamically highlights salient features, enhances feature extraction, and reduces computational overhead. A fusion strategy that separates the dense-bone region from the fusion area effectively avoids the loss of MRI dense-bone information. Experimental results show that the model depicts fusion details well while maintaining high fusion efficiency.
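To make the attention components named above concrete, the following is a minimal PyTorch-style sketch of a CBAM block (channel attention followed by spatial attention), in the spirit of the module cited in the thesis. The reduction ratio, kernel size, and class names are illustrative assumptions rather than the thesis's actual implementation.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Channel attention: pool the spatial dimensions with average and max
    pooling, run both through a shared MLP, and gate the channels."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))        # (B, C)
        mx = self.mlp(x.amax(dim=(2, 3)))         # (B, C)
        return x * torch.sigmoid(avg + mx).view(b, c, 1, 1)

class SpatialAttention(nn.Module):
    """Spatial attention: pool across channels and learn a per-pixel gate."""
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)         # (B, 1, H, W)
        mx, _ = x.max(dim=1, keepdim=True)        # (B, 1, H, W)
        return x * torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))

class CBAM(nn.Module):
    """CBAM: channel attention followed by spatial attention."""
    def __init__(self, channels, reduction=16, kernel_size=7):
        super().__init__()
        self.ca = ChannelAttention(channels, reduction)
        self.sa = SpatialAttention(kernel_size)

    def forward(self, x):
        return self.sa(self.ca(x))
```

In a fusion encoder, a block like this would typically follow a convolutional stage, re-weighting the extracted MRI/PET features before they are merged.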
3. To address the weaker performance of the Res-Attention Net model on soft-tissue brightness and detail texture, and its limited generalization, this thesis proposes a multi-axis gated MLP (multilayer perceptron) model for fusing images of various modalities (DAGM-Fusion). The model is a dual-path (global path and enhanced path), three-constraint fusion structure (global constraint, enhanced constraint, and global constraint). The global path weakly supervises the training of the enhanced path, and the three constraints keep training smooth. The model uses the designed multi-axis gated MLP module (Ag-MLP) to focus on one-dimensional feature extraction and combines it with CNNs to achieve sparse interaction between features. The multi-axis structure of Ag-MLP lets the MLP work well in the shallower layers of the network and in pixel-level tasks on small datasets. In addition, the designed patch-loss method automatically generates a loss weight for each image block according to its pixel intensity, which effectively improves fusion adaptability and detail preservation. Extensive experiments show that the model achieves efficient, highly detailed fusion for every modality.
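The abstract does not spell out the patch-loss formulation, but one plausible reading is a per-patch weight derived from mean pixel intensity that modulates a reconstruction term. Below is a minimal sketch under that assumption, using single-channel images in [0, 1], a 16-pixel patch size, and an L1 reconstruction term; the function name and all hyperparameters are illustrative, not taken from the thesis.

```python
import torch
import torch.nn.functional as F

def patch_weighted_l1(fused: torch.Tensor, source: torch.Tensor,
                      patch: int = 16, eps: float = 1e-6) -> torch.Tensor:
    """Hypothetical patch-wise weighted L1 loss: each patch of the source
    image receives a weight proportional to its mean intensity, so brighter,
    information-rich regions contribute more to the fusion loss.
    `fused` and `source` are (B, 1, H, W) tensors with values in [0, 1]."""
    # Per-patch mean intensity of the source -> (B, 1, H/patch, W/patch)
    intensity = F.avg_pool2d(source, kernel_size=patch, stride=patch)
    # Normalize so the weights for each image sum to 1
    weights = intensity / (intensity.sum(dim=(2, 3), keepdim=True) + eps)
    # Per-patch mean absolute error between the fused result and the source
    patch_err = F.avg_pool2d((fused - source).abs(), kernel_size=patch, stride=patch)
    return (weights * patch_err).sum(dim=(2, 3)).mean()

# Example: combine intensity-weighted reconstruction terms for two modalities
# (random tensors stand in for real MRI/PET data).
if __name__ == "__main__":
    fused = torch.rand(2, 1, 256, 256)
    mri = torch.rand(2, 1, 256, 256)
    pet = torch.rand(2, 1, 256, 256)
    loss = patch_weighted_l1(fused, mri) + patch_weighted_l1(fused, pet)
    print(loss.item())
```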
Keywords/Search Tags:Brain image fusion, Deep learning, Patch-GAN, Attention module, MLP