Tumor segmentation based on multimodal magnetic resonance imaging (MRI) plays an important role in disease diagnosis and treatment. Traditional medical image segmentation relies on manual delineation, which depends heavily on the skill and condition of the operator, has poor reproducibility, and is time-consuming and labor-intensive. With the development of artificial intelligence, deep learning-based medical image segmentation has become an active research topic. Deep learning segmentation methods typically use convolutional operators to extract local image features in a sliding-window fashion. The Transformer architecture was later introduced to preserve global image information by dividing the input image into patches and adding positional encodings. Although these methods are widely used in brain tumor segmentation, most segmentation models ignore the multimodal nature of brain tumor images and simply concatenate the multimodal data for feature learning. Such concatenation cannot effectively learn the complementary and modality-specific features across different modalities, which limits segmentation accuracy. This paper therefore studies multimodal fusion algorithms for brain tumor image segmentation, aiming to improve tumor segmentation accuracy by learning the distinctive features of each modality.

The main contributions of this work are as follows: (1) by surveying recent related literature, it reviews the current state of medical image segmentation research at home and abroad and summarizes the advantages and disadvantages of existing methods; (2) through a detailed analysis of multimodal brain tumor image data, it shows that different modalities provide different tumor information, which demonstrates the necessity of multimodal data fusion and establishes the research direction of this paper; (3) it proposes a brain tumor image segmentation model based on deep residual learning and multimodal branch feature fusion, which improves segmentation performance by extracting and fusing the distinctive and complementary features of different modalities; validated on the BraTS2021 dataset, the model achieved competitive segmentation accuracy, with Dice scores of 83.3 for enhancing tumor, 89.07 for tumor core, and 91.44 for whole tumor; (4) it proposes a multimodal fusion brain tumor segmentation model based on channel swapping and Swin-Transformer, in which modalities are first grouped into categories, features from different categories are extracted and channel-swapped to obtain locally complementary features, and a Swin-Transformer further fuses global and local features; on the BraTS2021 dataset, this model achieved a competitive Dice score of 91.07 for whole tumor.
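The channel-swapping step described in contribution (4) can be sketched as follows. This is a minimal illustration, not the thesis's exact implementation: the 50% swap ratio, the channel-major layout, and the function name `channel_swap` are all illustrative assumptions.

```python
def channel_swap(feat_a, feat_b, ratio=0.5):
    """Exchange the first `ratio` fraction of channels between two
    modality feature maps (represented here as lists of channels),
    so each branch receives locally complementary features from the
    other modality. Ratio and channel ordering are assumptions."""
    k = int(len(feat_a) * ratio)
    # Swap the leading k channels between the two branches
    out_a = feat_b[:k] + feat_a[k:]
    out_b = feat_a[:k] + feat_b[k:]
    return out_a, out_b

# Toy example: 4-channel "feature maps" from two modality branches
t1 = [[0, 0], [1, 1], [2, 2], [3, 3]]
flair = [[9, 9], [8, 8], [7, 7], [6, 6]]
fused_t1, fused_flair = channel_swap(t1, flair, ratio=0.5)
# fused_t1 now begins with FLAIR channels 0-1, followed by T1 channels 2-3
```

In a full model, the swapped features would then pass through the Swin-Transformer fusion stage so that global context is combined with these exchanged local features.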