
MR Images Cross Modality Translation With Generative Adversarial Network

Posted on: 2022-08-12
Degree: Master
Type: Thesis
Country: China
Candidate: P Zhao
Full Text: PDF
GTID: 2504306740498934
Subject: Control Engineering
Abstract/Summary:
Magnetic Resonance Imaging (MRI) is pervasively used in clinical diagnosis. MR images usually come in several modalities, and multi-modality fusion improves the accuracy and reliability of automatic medical image analysis and diagnosis systems. However, collecting sufficient multi-modality data in medical practice is costly and difficult. Medical image cross-modality translation, which translates images of a known modality into a target modality, is therefore of practical significance: it reduces the cost of collecting multi-modality data, enlarges data sets, and decreases the imbalance between modalities.

Medical image cross-modality translation is a branch of image-to-image translation, and in recent years most translation methods have been based on Generative Adversarial Networks (GANs). For MR image cross-modality translation, current methods have several difficulties and shortcomings. Existing 2D methods, which operate on 2D slices, do not consider the images' 3D characteristics, i.e., the voxel-wise relationship between slices. Conversely, although existing 3D methods, which operate on 3D voxel patches, can make up for this limitation, they depend heavily on hardware, with high execution cost and low practicality. Furthermore, GAN theory has not been investigated deeply in these methods.

To address these problems, a novel GAN-based 3D method, MR image translation GAN (MRi-Trans-GAN), is established. Evaluated by peak signal-to-noise ratio (PSNR) and structural similarity (SSIM), it surpasses state-of-the-art 2D and 3D methods on the brain tumor data set BraTS2020 and the healthy data set IXI. MRi-Trans-GAN uses unbalanced 3D patches (128×128×3) with unbalanced 3D convolutions, which account for the voxel relationship between slices while simultaneously reducing the hardware pressure of computation. It adopts an advanced adversarial loss, WGAN-GP-SN, i.e., the Wasserstein distance with gradient penalty (WGAN-GP) combined with spectral normalization, in place of traditional adversarial losses, yielding better image quality and fidelity.

The reconstruction loss of MRi-Trans-GAN is then investigated, and an overfitting problem of the voxel-wise reconstruction loss computed by Mean Absolute Error (MAE) is found. Inspired by the 2D perceptual loss used for natural images, a novel 3D feature-wise perceptual loss with transfer learning is proposed; it is computed on a VGG16 network pre-trained on ImageNet and accelerated by special 3D convolutions. Serving as a regularization term for the MAE loss, the 3D perceptual loss proves suitable for MRi-Trans-GAN and further improves the quality of its cross-modality translation.

Finally, mixed-precision technology is introduced into MRi-Trans-GAN for acceleration. Under global mixed precision, adaptive adjustments to the calculation of instance normalization, spectral normalization, and the 3D perceptual loss are developed to maintain numerical stability and synthesis quality. Compared with single precision, mixed precision gives MRi-Trans-GAN faster training and prediction and lower hardware resource consumption, with no degradation in cross-modality translation results.
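The unbalanced-patch idea above can be sketched in PyTorch. The block below is a minimal illustration, not the thesis's actual architecture: a 3D convolution applied to a thin 128×128×3 patch, so the network sees the voxel relationship across three neighboring slices while the memory footprint stays close to that of a 2D slice. The class and layer choices are my assumptions.

```python
import torch
import torch.nn as nn

# Hypothetical sketch of an "unbalanced" 3D convolution block: the input patch
# is thin along the slice axis (depth 3) but full-size in-plane (128x128).
class UnbalancedConv3dBlock(nn.Module):
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        # A 3x3x3 kernel with padding 1 preserves all three dimensions, so the
        # block still mixes information across the 3 slices.
        self.conv = nn.Conv3d(in_ch, out_ch, kernel_size=(3, 3, 3), padding=(1, 1, 1))
        self.norm = nn.InstanceNorm3d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.norm(self.conv(x)))

# One single-channel MR patch: (batch, channels, depth, height, width).
patch = torch.randn(1, 1, 3, 128, 128)
out = UnbalancedConv3dBlock(1, 16)(patch)
```

A full generator would stack several such blocks; the point here is only that the thin depth axis keeps the activation volume roughly 3× a 2D feature map rather than 128×.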
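The WGAN-GP-SN adversarial loss combines two standard ingredients: a gradient penalty on interpolated samples (WGAN-GP) and spectral normalization on the critic's layers. The sketch below shows both on a toy MLP critic, assuming PyTorch; it is not the thesis's 3D discriminator, and the shapes and coefficients are illustrative.

```python
import torch
import torch.nn as nn
from torch.nn.utils import spectral_norm

# Spectral normalization constrains each layer's largest singular value,
# bounding the critic's Lipschitz constant.
critic = nn.Sequential(
    spectral_norm(nn.Linear(16, 64)),
    nn.LeakyReLU(0.2),
    spectral_norm(nn.Linear(64, 1)),
)

def gradient_penalty(critic: nn.Module, real: torch.Tensor, fake: torch.Tensor) -> torch.Tensor:
    # Interpolate randomly between real and generated samples.
    eps = torch.rand(real.size(0), 1)
    x_hat = (eps * real + (1 - eps) * fake).requires_grad_(True)
    score = critic(x_hat).sum()
    # Gradient of the critic score with respect to the interpolated input.
    grad, = torch.autograd.grad(score, x_hat, create_graph=True)
    # Penalize deviation of the gradient norm from 1 (the WGAN-GP term).
    return ((grad.norm(2, dim=1) - 1) ** 2).mean()

real = torch.randn(4, 16)
fake = torch.randn(4, 16)
gp = gradient_penalty(critic, real, fake)
```

In training, `gp` would be scaled by a penalty coefficient (commonly 10 in the WGAN-GP literature) and added to the critic's Wasserstein loss.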
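The feature-wise perceptual loss described above compares activations of a frozen pre-trained network rather than raw voxels, and serves as a regularizer on the MAE term. Below is a minimal sketch of that structure; a small random Conv3d stack stands in for the pre-trained VGG16 features, and the 0.1 weight is an arbitrary illustrative choice, not the thesis's setting.

```python
import torch
import torch.nn as nn

# Stand-in for a pre-trained feature extractor; in the thesis this role is
# played by VGG16 features adapted with special 3D convolutions.
feature_net = nn.Sequential(
    nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(),
    nn.Conv3d(8, 8, 3, padding=1),
)
for p in feature_net.parameters():
    p.requires_grad_(False)  # frozen, transfer-learning style

def perceptual_loss(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    # Compare feature maps instead of raw voxels.
    return nn.functional.l1_loss(feature_net(pred), feature_net(target))

# In training, `pred` would come from the generator; random tensors here.
pred = torch.randn(2, 1, 3, 32, 32)
target = torch.randn(2, 1, 3, 32, 32)
# Voxel-wise MAE regularized by the feature-wise perceptual term.
total = nn.functional.l1_loss(pred, target) + 0.1 * perceptual_loss(pred, target)
</```stripped>
```

The regularization reading is that the MAE term anchors voxel intensities while the perceptual term penalizes structural mismatch in feature space, which is the mechanism the abstract credits for reducing MAE's overfitting.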
Keywords/Search Tags:Generative Adversarial Networks, cross-modality translation, transfer learning, mixed-precision