
Research On Multimodal Medical Image Fusion Method Based On Gabor-DCNNs And Double U-Net

Posted on: 2022-11-20
Degree: Master
Type: Thesis
Country: China
Candidate: J Zhang
Full Text: PDF
GTID: 2480306761990629
Subject: Automation Technology
Abstract/Summary:
With the development of medical imaging technology, medical images play an increasingly important role in clinical diagnosis. Different imaging modalities provide different information. For example, computed tomography (CT) images depict bone very clearly, magnetic resonance (MR) images clearly reflect soft-tissue structure, positron emission tomography (PET) images provide information on human metabolism, and single-photon emission computed tomography (SPECT) images provide clear information about the blood supply of tissues and organs. However, any single modality has its own limitations: it reflects only one kind of tissue information and cannot provide comprehensive and accurate information. Multimodal image fusion has therefore become an effective solution; it extracts the information carried by images of different modalities and generates a fused image with richer information and features, which plays a key role in medical diagnosis and clinical surgery. Through an in-depth study of multimodal medical image fusion and deep learning theory, this thesis analyzes the shortcomings of current fusion methods and improves upon them. The main contents are as follows:

(1) Because current multimodal medical image fusion methods cannot fully represent the complex texture and edge information of lesions in the fused image, a method combining multiple Gabor-representation-based CNNs with a fuzzy neural network (G-CNNs) is proposed to fuse CT and MR images. The method consists of two parts. In the first part, the data set is filtered by a bank of Gabor filters with different scales and orientations to obtain different pairs of Gabor representations of the CT and MR images; each representation pair is used to train a corresponding CNN, yielding a group of "G-CNNs". In the second part, a fuzzy neural network fuses the multiple outputs of the G-CNNs into the final fused image. Experimental results show that the proposed method significantly outperforms competing fusion methods in both objective evaluation and visual quality. Compared with other methods, it better integrates the rich texture features and clear edge information of the lesion regions of the source images into a single image, improving the quality of multimodal medical image fusion and effectively assisting doctors in disease diagnosis.

(2) Convolutional neural networks require large training sets and long training times for image fusion tasks, and existing fusion methods ignore the semantic conflict between the source medical images and lose useful semantic information, which easily leads to insufficient tissue semantics in the fused image. To address this, a method based on a deformable-convolution double U-Net and a semantic loss (2U-F) is proposed to fuse multimodal CT and MR images. The method consists of two parts. The first, the fusion part, extracts detail and semantic information from the source images through the downsampling and upsampling paths of a U-Net and concatenates them through skip connections to generate the fused image. The second, the reconstruction part, uses a U-Net autoencoder structure to extract features from the fused image produced by the fusion part and reconstruct it through the decoder, thereby optimizing the fusion result. Both parts adopt a deformable-convolution U-Net structure with a stronger ability to extract medical image features. The combination of a reconstruction loss, a semantic loss, and a structural loss is proposed as the overall loss function, so that the network maps the intensities of source images of different modalities into the same semantic space before fusing them, resolving the semantic conflict between multimodal medical images. Experimental results show that the proposed method is significantly better than other fusion methods on the semantic-loss index, better resolves semantic conflicts between the source images, integrates richer semantic information into a single image, and improves applicability in clinical settings.

(3) Based on the two improved methods above, a multimodal medical image fusion system is developed on the PyCharm platform. The system contains three modules: login, comparison of fusion methods based on G-CNNs, and comparison of fusion methods based on 2U-F. The G-CNNs comparison module implements the fusion method combining multiple Gabor-representation-based CNNs with a fuzzy neural network; it fuses the same pair of source images with this method and with the comparison methods and computes objective indices on the fusion results. The 2U-F comparison module likewise fuses the same pair of source images with the double U-Net and semantic-loss method and with the comparison methods and computes objective indices on the results. The system can be used clinically to help medical workers select higher-quality fused medical images.
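As an illustration of the Gabor representation step in the first contribution, the following is a minimal NumPy sketch (not the thesis code) of a Gabor filter bank applied at several scales and orientations; the kernel size, wavelengths, and the `0.56 * lambd` sigma heuristic are assumptions chosen for the example:

```python
import numpy as np

def gabor_kernel(ksize, sigma, theta, lambd, gamma=0.5, psi=0.0):
    """Real-valued Gabor kernel: Gaussian envelope times a cosine carrier."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    x_t = x * np.cos(theta) + y * np.sin(theta)   # rotate coordinates
    y_t = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_t**2 + (gamma * y_t)**2) / (2.0 * sigma**2))
    carrier = np.cos(2.0 * np.pi * x_t / lambd + psi)
    return envelope * carrier

def gabor_bank(scales=(4.0, 8.0), orientations=4, ksize=31):
    """Bank of kernels over several wavelengths (scales) and orientations."""
    bank = []
    for lambd in scales:
        for k in range(orientations):
            theta = k * np.pi / orientations
            bank.append(gabor_kernel(ksize, sigma=0.56 * lambd,
                                     theta=theta, lambd=lambd))
    return bank

def filter_image(img, kernel):
    """Same-size filtering via FFT (circular boundary; fine for a sketch)."""
    kh, kw = kernel.shape
    pad = np.zeros_like(img, dtype=float)
    pad[:kh, :kw] = kernel
    # shift so the kernel is centered and the response is not displaced
    pad = np.roll(pad, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(pad)))

bank = gabor_bank()
ct = np.random.rand(64, 64)                        # stand-in for a CT slice
responses = [filter_image(ct, k) for k in bank]    # one CNN input per response
print(len(bank), responses[0].shape)               # 8 (64, 64)
```

In the thesis method, each such CT/MR response pair would train its own CNN, and a fuzzy neural network would then combine the per-representation outputs.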
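The combined loss of the 2U-F method in the second contribution can be sketched as follows. This is a toy NumPy illustration, not the thesis implementation: the correlation-based structural term, the `np.maximum` semantic target, and the weights `a`, `b`, `c` are all assumptions standing in for the actual losses:

```python
import numpy as np

def reconstruction_loss(fused, recon):
    """Pixel-wise MSE between the fused image and its reconstruction."""
    return np.mean((fused - recon) ** 2)

def structural_loss(fused, src):
    """1 - correlation coefficient: penalizes losing a source's structure."""
    f = fused - fused.mean()
    s = src - src.mean()
    denom = np.sqrt((f**2).sum() * (s**2).sum()) + 1e-8
    return 1.0 - (f * s).sum() / denom

def semantic_loss(fused, ct, mr):
    """Toy semantic term: keep fused intensities near a shared intensity
    mapping of both sources (stand-in for the thesis' semantic space)."""
    target = np.maximum(ct, mr)          # assumed mapping, not the author's
    return np.mean(np.abs(fused - target))

def total_loss(fused, recon, ct, mr, a=1.0, b=1.0, c=1.0):
    """Weighted sum of reconstruction, structural, and semantic terms."""
    return (a * reconstruction_loss(fused, recon)
            + b * (structural_loss(fused, ct) + structural_loss(fused, mr))
            + c * semantic_loss(fused, ct, mr))

ct = np.random.rand(32, 32)
mr = np.random.rand(32, 32)
fused = 0.5 * (ct + mr)                  # trivial fused image for the demo
loss = total_loss(fused, fused, ct, mr)  # recon == fused: MSE term is zero
print(loss)
```

In training, such a scalar loss would be backpropagated through both U-Nets; here plain arrays are used only to show how the three terms combine.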
Keywords/Search Tags: image fusion, Gabor representation, convolutional neural network, fuzzy neural network, Double U-Net, semantic loss