Medical image segmentation is one of the most important tasks in medical image analysis. Its goal is to identify the pixels of targets (e.g., organs or lesions) against the background of medical images. Accurate segmentation is essential for the diagnosis, monitoring, and treatment of diseases. Different imaging devices produce different modalities of medical images, and their differing imaging principles make them represent the same object differently. For example, CT images show muscles and bones more clearly, while MR images offer better contrast for soft tissues. Since single-modality medical images cannot provide sufficient information for segmentation, it is both important and challenging to make full use of multi-modality data while exploring the correlations between modalities. To this end, we study two situations: complete labels of the target modality and missing labels of the target modality. Our main work can be summarized as follows:

(1) A novel dual-attention-based multi-modality fusion segmentation network is proposed. We introduce an attention mechanism for the situation where the labels of the target modality are complete. The proposed network focuses on fusing the multi-modal features that benefit the final segmentation, while reducing the interference of features that are irrelevant across modalities. Specifically, at the feature fusion layer we introduce a channel attention module (CAM) to integrate the channel correlations between modalities; at the decision fusion layer we introduce a position attention module (PAM) to capture the spatial correlations between modalities. To avoid the prohibitive computational cost of the global PAM, we further propose a local position attention module (local PAM). A series of experiments on the BraTS dataset demonstrates the effectiveness of the method.

(2) An unsupervised domain-adaptive segmentation network via cross-modality boundary alignment is proposed. For the situation where the labels of the target modality are missing, we introduce unsupervised domain adaptation to reduce the domain shift between the source and target modalities for prostate segmentation. Specifically, considering two characteristics of prostate images, large shape variation and indistinct boundaries, we design domain adaptation over multi-scale features and over boundaries, respectively. In the shared feature space of the network, multi-scale filters capture prostates of different sizes; in the segmentation output space, a boundary-aware module addresses the unclear prostate boundaries. Extensive experiments on the PROSTATE50 and PROMISE12 datasets validate the effectiveness of the proposed method.
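For contribution (1), the following is a minimal PyTorch sketch of the two attention blocks, assuming a channel-affinity formulation for the CAM and non-overlapping spatial windows for the local PAM. The module names, window size, and tensor shapes are illustrative assumptions, not the thesis implementation.

```python
# Sketch of the channel attention module and a windowed (local) position attention
# module. All names and shapes are illustrative assumptions.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Channel attention: a C x C affinity matrix reweights the fused channels."""
    def forward(self, x):                        # x: (B, C, H, W) fused multi-modal features
        b, c, h, w = x.shape
        q = x.view(b, c, -1)                     # (B, C, N)
        k = x.view(b, c, -1).transpose(1, 2)     # (B, N, C)
        affinity = torch.softmax(q @ k, dim=-1)  # (B, C, C) channel correlation
        out = affinity @ x.view(b, c, -1)        # reweight channels
        return out.view(b, c, h, w) + x          # residual connection

class LocalPositionAttention(nn.Module):
    """Position attention restricted to non-overlapping windows to avoid the
    O((HW)^2) cost of global spatial attention."""
    def __init__(self, channels, window=8):      # assumes H and W divisible by `window`
        super().__init__()
        self.window = window
        mid = max(channels // 8, 1)
        self.query = nn.Conv2d(channels, mid, 1)
        self.key = nn.Conv2d(channels, mid, 1)
        self.value = nn.Conv2d(channels, channels, 1)

    def forward(self, x):                        # x: (B, C, H, W)
        b, c, h, w = x.shape
        s = self.window

        def to_windows(t):                       # (B, ch, H, W) -> (B*num_windows, ch, s*s)
            ch = t.shape[1]
            t = t.view(b, ch, h // s, s, w // s, s)
            return t.permute(0, 2, 4, 1, 3, 5).reshape(-1, ch, s * s)

        q = to_windows(self.query(x)).transpose(1, 2)   # (B*nw, s*s, mid)
        k = to_windows(self.key(x))                     # (B*nw, mid, s*s)
        v = to_windows(self.value(x))                   # (B*nw, C, s*s)
        attn = torch.softmax(q @ k, dim=-1)             # (B*nw, s*s, s*s) spatial correlation
        out = v @ attn.transpose(1, 2)                  # (B*nw, C, s*s)
        out = out.view(b, h // s, w // s, c, s, s).permute(0, 3, 1, 4, 2, 5)
        return out.reshape(b, c, h, w) + x              # residual connection
```

Restricting the position attention to windows of size s reduces the attention matrix from (HW)×(HW) to s²×s² per window, which is the motivation for the local PAM stated above.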
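For contribution (2), a minimal PyTorch sketch of the multi-scale feature block and the boundary-aware head follows, assuming parallel dilated convolutions for the multi-scale filters and an auxiliary boundary-map prediction for the boundary-aware module. The dilation rates, class count, and module names are illustrative assumptions, not the thesis code.

```python
# Sketch of a multi-scale feature block and a boundary-aware segmentation head.
# All names and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

class MultiScaleBlock(nn.Module):
    """Parallel convolutions with different dilation rates capture prostates of
    different sizes in the shared feature space."""
    def __init__(self, in_ch, out_ch, rates=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r) for r in rates
        ])
        self.fuse = nn.Conv2d(out_ch * len(rates), out_ch, 1)

    def forward(self, x):                         # x: (B, in_ch, H, W)
        return self.fuse(torch.cat([branch(x) for branch in self.branches], dim=1))

class BoundaryAwareHead(nn.Module):
    """Predicts a segmentation map plus an explicit boundary map; supervising and
    aligning the boundary map across domains targets the indistinct prostate edges."""
    def __init__(self, in_ch, num_classes=2):
        super().__init__()
        self.seg = nn.Conv2d(in_ch, num_classes, 1)
        self.boundary = nn.Conv2d(in_ch, 1, 1)

    def forward(self, feats):                     # feats: (B, in_ch, H, W)
        return self.seg(feats), torch.sigmoid(self.boundary(feats))
```

In this sketch the boundary output would be supervised on the labeled source modality (e.g., against edges extracted from the ground-truth masks) and adversarially aligned between domains, consistent with the cross-modality boundary alignment described above; the exact losses are not specified here.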