
Medical Image Fusion Algorithm Based On Deep Convolution Neural Network

Posted on: 2022-07-08
Degree: Master
Type: Thesis
Country: China
Candidate: X Yao
Full Text: PDF
GTID: 2504306314480804
Subject: Signal and Information Processing

Abstract/Summary:
With the rise of artificial intelligence, digital and intelligent medical technologies are gradually entering the medical diagnosis system, and computer vision has made new progress in medical image processing. Information richness and image clarity are important criteria for clinical diagnosis, yet the complementary advantages of different imaging modalities are not fully exploited when each is used alone. Combining two imaging modalities through fusion preserves as much of the patient's pathological information as possible, assists doctors in diagnosis, and compensates for the limited information of single-modality medical images. A single-modality medical image offers only limited features and information, whereas a fused multi-modality image is more complete and provides richer information for medical diagnosis. As diseases become more diverse and complex, the demands on medical diagnostic technology continue to rise. Traditional medical image fusion algorithms suffer from blurred feature points and fusion artifacts, so their results are inaccurate and details are not clearly preserved. To address fusion artifacts and the shortage of medical image training data, this thesis carries out the following research.

(1) To address fusion artifacts, a medical image fusion method based on LPCNN is proposed, which combines the Laplacian pyramid with a convolutional neural network (CNN). First, the source images are decomposed with the Laplacian pyramid. Second, the CNN is improved by structural risk minimization, and the spatial dimension of the convolutional layers is reduced by setting the stride. Then, an optimal weight map W is generated by iterative optimization to determine the CNN parameters that guide the fusion. Finally, the fused image is produced by inverse reconstruction of the local Laplacian pyramid (LLP). Simulation experiments show that the algorithm overcomes the fusion artifacts of traditional medical image fusion algorithms and achieves a better fusion effect, providing useful image information for accurate lesion localization and surgical treatment.

(2) A medical image fusion method based on an improved generative adversarial network (GAN) is proposed. Because medical image data are scarce, transfer learning is used to transfer the parameters of a GAN trained on visible and infrared images to the improved CNN. First, a large number of infrared and visible images in the source domain are used to build a pre-trained network model. Second, representative semantic information is extracted during fusion, and the feature mapping between the fused image and the source images is learned and encoded in the network parameters. Then, a small amount of CT and MR data in the target domain is used to fine-tune the model, transferring the parameters from the pre-trained network to the improved CNN. Finally, the weight map W guides the fusion of the CT and MR source images, and high-quality fused images are obtained by inverse LLP reconstruction. Simulation experiments show that the algorithm overcomes the shortage of medical image training data; the fused image preserves the edges, textures, and other details of the source images, and the fusion effect is good.
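The pipeline of method (1) — pyramid decomposition, a weight map that guides per-level fusion, and inverse reconstruction — can be outlined as follows. This is a minimal sketch, not the thesis implementation: the weight map here is a hypothetical local-energy rule standing in for the improved CNN, and the function and file names are illustrative.

import cv2
import numpy as np

def laplacian_pyramid(img, levels=4):
    """Decompose img into `levels` band-pass layers plus a low-pass residual."""
    gauss = img.astype(np.float32)
    pyr = []
    for _ in range(levels):
        down = cv2.pyrDown(gauss)
        up = cv2.pyrUp(down, dstsize=(gauss.shape[1], gauss.shape[0]))
        pyr.append(gauss - up)      # detail (band-pass) layer at this scale
        gauss = down
    pyr.append(gauss)               # low-pass residual
    return pyr

def reconstruct(pyr):
    """Inverse reconstruction: collapse the Laplacian pyramid back to an image."""
    img = pyr[-1]
    for lap in reversed(pyr[:-1]):
        img = cv2.pyrUp(img, dstsize=(lap.shape[1], lap.shape[0])) + lap
    return img

def weight_map(a, b, ksize=7):
    """Hypothetical fusion weight: prefer the source with higher local energy."""
    ea = cv2.boxFilter(a * a, -1, (ksize, ksize))
    eb = cv2.boxFilter(b * b, -1, (ksize, ksize))
    return (ea >= eb).astype(np.float32)

def fuse(src_a, src_b, levels=4):
    """Fuse two co-registered single-channel images level by level."""
    pa = laplacian_pyramid(src_a, levels)
    pb = laplacian_pyramid(src_b, levels)
    fused = []
    for la, lb in zip(pa, pb):
        w = weight_map(la, lb)      # in the thesis, the weight map W comes from the improved CNN
        fused.append(w * la + (1 - w) * lb)
    return np.clip(reconstruct(fused), 0, 255).astype(np.uint8)

# Usage (hypothetical file names): both inputs must be aligned and the same size.
# fused = fuse(cv2.imread("ct.png", 0), cv2.imread("mr.png", 0))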
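The transfer-learning step of method (2) — reusing parameters learned on infrared/visible pairs and fine-tuning on a small CT/MR set — could look roughly like the following PyTorch sketch. The network architecture, checkpoint name, and loss are hypothetical stand-ins that the abstract does not specify; only the overall pre-train, freeze, and fine-tune pattern is taken from the text.

import torch
import torch.nn as nn

class FusionCNN(nn.Module):
    def __init__(self):
        super().__init__()
        # Strided convolutions reduce the spatial dimension, as in the improved CNN.
        self.encoder = nn.Sequential(
            nn.Conv2d(2, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, ct, mr):
        # Stack the two modalities as channels; H and W assumed divisible by 4.
        x = torch.cat([ct, mr], dim=1)
        return self.decoder(self.encoder(x))   # per-pixel fusion weight map W

model = FusionCNN()
# Load parameters learned on infrared/visible pairs (hypothetical checkpoint file).
model.load_state_dict(torch.load("ir_vis_pretrained.pth"))

# Freeze the feature extractor; only the decoder is fine-tuned on CT/MR data.
for p in model.encoder.parameters():
    p.requires_grad = False

opt = torch.optim.Adam(model.decoder.parameters(), lr=1e-4)

def fine_tune_step(ct, mr):
    """One fine-tuning step on a (ct, mr) batch of shape (N, 1, H, W) in [0, 1]."""
    w = model(ct, mr)
    fused = w * ct + (1 - w) * mr               # weight map W guides the fusion
    # Stand-in loss: keep the fused image close to both source images.
    loss = ((fused - ct) ** 2).mean() + ((fused - mr) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()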
Keywords/Search Tags: medical image fusion, convolutional neural network, artifact, generative adversarial network, transfer learning