
Multi-modal Image Segmentation Of Brain And Abdomen Based On Improved Unet

Posted on: 2020-06-22
Degree: Master
Type: Thesis
Country: China
Candidate: X H Meng
Full Text: PDF
GTID: 2404330602952267
Subject: Engineering
Abstract/Summary:
Deep learning is a cornerstone of artificial intelligence, and in recent years it has been applied ever more widely in image processing, in tasks such as image recognition, object detection, and segmentation. This thesis focuses on medical image segmentation. The research subjects are magnetic resonance imaging (MRI) and positron emission tomography (PET), specifically low-field abdominal MRI on the one hand and multi-modal brain MRI and PET imaging on the other.

The segmentation target in the abdominal MRI images is the gastric region. Because these images are acquired mainly to guide radiotherapy, low-field MRI is used, which leads to poor image quality, low contrast, and blurred gastric boundaries, all of which make stomach segmentation difficult. To obtain good segmentation results, a fully convolutional segmentation method constrained by a transferred autoencoder network is proposed. The target of the brain image segmentation is the epileptic lesion area, a lesion region of brain tissue. The brain imaging comprises MRI and PET, and a fully convolutional network (Y-net) based on multi-modal feature fusion is proposed; it learns epileptic lesions from MRI and PET images simultaneously and combines the idea of complementary foreground and background segmentation. In summary, the following three aspects are studied in detail.

(1) An autoencoder constraint for transfer learning with a fully convolutional network is proposed. The fully convolutional network U-net is used as the base network, and an autoencoder is built to learn the structural information of the gastric-region labels. The trained autoencoder is applied to the abdominal MRI label map and to the U-net segmentation map for the same image, and the two reconstructions, img1 and img2, are used to construct a Dice loss that guides the training of U-net. The U-net architecture is completely symmetric and can be summarised as follows: four down-sampling stages, each with convolution layers using different numbers of kernels, form the encoding path, and four up-sampling stages with corresponding convolution layers form the decoding path. Skip connections between the encoding and decoding layers fuse shallow and deep convolutional features, combining high-level semantics with low-level detail, which meets the needs of segmentation for both kinds of information. In training, U-net first learns the texture of the gastric region from high-quality abdominal MRI images, the autoencoder is trained on the labels of the low-quality images to learn the contour and shape of the gastric region, and the pre-trained network is then fine-tuned on the low-quality abdominal images.
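To make the shape constraint in (1) concrete, the sketch below shows one way the Dice loss between the two autoencoder reconstructions (img1 from the label map, img2 from the U-net output) could be combined with an ordinary segmentation Dice loss. This is a minimal PyTorch illustration under assumptions, not the thesis code: the weighting factor `w`, the function names, the presence of a direct segmentation term, and freezing the autoencoder are all assumptions.

```python
import torch

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss between two probability maps."""
    inter = (pred * target).sum()
    union = pred.sum() + target.sum()
    return 1.0 - (2.0 * inter + eps) / (union + eps)

def shape_constrained_loss(unet_out, label, autoencoder, w=0.5):
    """Segmentation Dice loss plus a shape-constraint term built from the
    autoencoder reconstructions of the label map (img1) and of the U-net
    prediction (img2). `w` is a hypothetical weighting factor."""
    seg_term = dice_loss(unet_out, label)
    with torch.no_grad():
        img1 = autoencoder(label)        # reconstruction of the ground-truth label map
    img2 = autoencoder(unet_out)         # reconstruction of the U-net prediction
    shape_term = dice_loss(img2, img1)
    return seg_term + w * shape_term
```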
(2) A method for epileptic-focus segmentation based on multi-modal feature fusion with complementary foreground and background is proposed. The two branches of the multi-modal network learn MRI image features and PET image features respectively, and each branch adopts the encoder-decoder structure of U-net. The deep layers of the multi-modal network fuse the MRI and PET features extracted by the branches. Two fusion methods are compared: feature-layer superposition and feature-layer weighted summation. Experiments show that feature-layer superposition is better suited to multi-modal feature fusion for epilepsy segmentation. In addition, the same model is trained to learn the background (non-epileptic) region, and the background-based model and the focus-based model are fused at the decision level. Experiments show that the background-based segmentation complements and improves segmentation based on the foreground alone.

(3) Because the number of medical images is small and insufficient for the network to fully learn the distribution of the image data, simple image-processing operations were used to augment the data in the first two studies. To make fuller use of the limited image data, a segmentation method for epileptic lesions based on CycleGAN modality transformation is proposed. CycleGAN is first trained to learn the mapping between MRI and PET images, so that more samples can be obtained by converting between the two modalities. Experiments show that the MRI-to-PET conversion is the more realistic direction, and the generated PET images improve the segmentation results.
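For the feature fusion described in (2), the following sketch contrasts the two fusion strategies. It is a hedged illustration only: "feature-layer superposition" is interpreted here as channel-wise concatenation followed by a 1x1 convolution, and the learnable mixing weight in the weighted-sum variant is an assumption, not a detail taken from the thesis.

```python
import torch
import torch.nn as nn

class FeatureFusion(nn.Module):
    """Fuse deep features from the MRI branch and the PET branch.

    mode='concat'  : stack feature maps along the channel axis
                     (one reading of "feature-layer superposition")
    mode='weighted': weighted sum of the two feature maps with a
                     learnable mixing coefficient (an assumption)
    """
    def __init__(self, channels, mode="concat"):
        super().__init__()
        self.mode = mode
        if mode == "concat":
            # 1x1 convolution brings the stacked channels back to the original width
            self.reduce = nn.Conv2d(2 * channels, channels, kernel_size=1)
        else:
            self.alpha = nn.Parameter(torch.tensor(0.5))

    def forward(self, feat_mri, feat_pet):
        if self.mode == "concat":
            return self.reduce(torch.cat([feat_mri, feat_pet], dim=1))
        return self.alpha * feat_mri + (1.0 - self.alpha) * feat_pet

# hypothetical usage inside the deep layers of the two-branch network:
# fuse = FeatureFusion(channels=256, mode="concat")
# fused = fuse(f_mri, f_pet)
```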
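For the data augmentation in (3), the short sketch below shows how a trained CycleGAN generator for the MRI-to-PET direction could be used to synthesise additional PET samples. The generator object `g_mri2pet` and the function name are hypothetical; the thesis does not prescribe this interface.

```python
import torch

def synthesize_pet_from_mri(mri_batch, g_mri2pet):
    """Generate synthetic PET images from MRI slices with a trained
    CycleGAN generator (MRI -> PET direction), enlarging the training
    set available to the multi-modal segmentation network."""
    g_mri2pet.eval()
    with torch.no_grad():
        fake_pet = g_mri2pet(mri_batch)   # synthetic PET counterparts
    return fake_pet

# hypothetical usage: pair real MRI slices with their synthetic PET images
# and reuse the existing lesion labels as extra (MRI, PET, label) triples
# fake_pet = synthesize_pet_from_mri(mri_slices, g_mri2pet)
```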
Keywords/Search Tags: transfer learning, image segmentation, fully convolutional neural network, MRI image, PET image