In recent years, brain imaging modality transfer, together with related tasks such as brain image reconstruction, denoising, and segmentation, has become popular in the field of medical imaging and plays an important role in the clinical diagnosis and monitoring of brain diseases. Brain imaging modality transfer aims to learn the mapping between brain images of different modalities and to convert an image of one modality into another, compensating for situations where some imaging technologies are unavailable or some modality data is incomplete. The resulting multi-modal data also provides clinicians with diverse diagnostic information. Commonly used brain imaging modality transfer methods can be divided into supervised and unsupervised learning based methods. Supervised methods rely on a large number of paired, labeled training samples; such paired data must be acquired and annotated by experts and is therefore generally difficult to obtain. Unsupervised methods do not depend on any labels and are therefore more widely applicable. However, these methods typically measure the gap between input and output only with a pixel-wise reconstruction loss, so the generated images are often blurred or distorted.

Focusing on the above issues, this paper proposes a brain imaging modality transfer framework based on unsupervised learning, named BMT-GAN. The framework builds on the Generative Adversarial Network (GAN) and combines a cycle-consistency loss with an adversarial loss, which avoids the problem that a conventional GAN cannot guarantee input-output pairing. In addition, a set of non-adversarial losses between the reconstructed output images and the reference images is introduced as an extra constraint to improve output quality, which not only reduces
the perceptual and style differences between the input and output modality images but also enhances the global consistency between them.

On the other hand, most current brain imaging modality transfer work converts a single modality from one domain to another, as in the BMT-GAN framework above. In clinical practice, multiple modalities can be acquired in the same scanning session, and exploiting the diverse characteristics of multi-modal data is more conducive to synthesizing a missing modality. We therefore extend the unsupervised approach with the idea of self-supervised learning, yielding the Ssl-GAN framework. Ssl-GAN constructs multi-branch inputs so that the network can learn the diverse characteristics of multiple modalities. In addition, supervision signals are mined from large-scale unlabeled data by constructing auxiliary (pretext) tasks, and the network is trained on this constructed supervision, which not only ensures the similarity between the input and output modality images but also learns representations that are valuable for downstream tasks.
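The loss terms described above can be illustrated with a minimal NumPy sketch. The toy "generators" `G` and `F`, the array shapes, and the Gram-matrix formulation of the style term are illustrative assumptions for exposition, not the actual networks or loss weights used in BMT-GAN.

```python
import numpy as np

def l1_loss(a, b):
    """Pixel-wise reconstruction term: mean absolute error."""
    return np.mean(np.abs(a - b))

def cycle_consistency_loss(G, F, x_a, x_b):
    """L_cyc = ||F(G(x_a)) - x_a||_1 + ||G(F(x_b)) - x_b||_1:
    translating to the other modality and back should recover the input."""
    return l1_loss(F(G(x_a)), x_a) + l1_loss(G(F(x_b)), x_b)

def gram_matrix(features):
    """Channel-to-channel correlations of a (C, H, W) feature map,
    a common way to summarize image style."""
    c, h, w = features.shape
    flat = features.reshape(c, h * w)
    return flat @ flat.T / (c * h * w)

def style_loss(feat_out, feat_ref):
    """A non-adversarial style term: distance between Gram matrices
    of features from the output and the reference image."""
    return np.mean((gram_matrix(feat_out) - gram_matrix(feat_ref)) ** 2)

# Toy invertible "generators" standing in for the two mapping networks.
G = lambda x: 2.0 * x + 0.1    # domain A -> B
F = lambda y: (y - 0.1) / 2.0  # domain B -> A

rng = np.random.default_rng(0)
x_a = rng.random((8, 8))  # stand-in source-modality image
x_b = rng.random((8, 8))  # stand-in target-modality image
feat = rng.random((3, 8, 8))  # stand-in feature map

print(cycle_consistency_loss(G, F, x_a, x_b) < 1e-9)  # exact inverses -> ~0
print(style_loss(feat, feat) == 0.0)  # identical features -> zero style gap
```

In a real training loop these terms would be weighted and summed with the adversarial loss, and `G`/`F` would be convolutional generators trained jointly with discriminators; the sketch only shows why a perfect cycle drives the cycle-consistency term toward zero.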