Medical image segmentation separates the lesion (focus) areas in medical images according to specific pathological features, so that doctors can analyze and treat the overall condition from the informative parts of a patient's images. Traditionally, this segmentation is a professional task performed by specialist doctors. With the rapid progress of artificial intelligence technology, doctors can improve both the efficiency and the accuracy of medical image segmentation with more intelligent assistance. Medical image segmentation plays an important role in locating, identifying, and visualizing lesion areas in medical images, and its results form part of the basis on which doctors formulate surgical plans. However, current deep-learning network architectures may lose important information and key details when extracting shallow feature maps, and such losses can lead to errors in clinical diagnosis, so there is still room for improvement. This thesis focuses on the loss of important information and key details in medical image segmentation and conducts research by improving the DeepLabV3+ framework and the U²-Net framework. The main work and innovations are as follows:

(1) The experimental dataset, the LGG dataset, is normalized to reduce the high variance between image pixels, while the size and heterogeneity of the dataset are increased through image rotation, vertical and horizontal translation, scaling, and cropping.

(2) To address the insufficiently detailed segmentation of glioma lesion boundaries, this thesis proposes an improved glioma medical image segmentation framework based on DeepLabV3+. The DeepLabV3+ backbone is usually a MobileNetV2 or Xception architecture; the proposed model replaces it with the better-performing ConvNeXt backbone. A Swin Transformer module, which combines window-based and shifted-window multi-head self-attention, is integrated into the shallow (low-level) feature branch of DeepLabV3+ to capture key details of the shallow feature map that were previously lost. In this way, the dual fusion of deep and shallow features is realized, lesion areas in medical images are extracted more accurately, and edge details appear more clearly and distinctly in the predicted segmentation map. Experiments on the LGG dataset demonstrate the effectiveness of this method and yield more accurate segmentation results.

(3) To address the low overall segmentation accuracy for glioma brain tumors, this thesis proposes an improved medical image segmentation model for brain glioma based on U²-Net. In this model, the original U-shaped residual block (RSU) encoder at the bottom of U²-Net is replaced by a MobileViT block to capture globally associated long-range context. In addition, an attention module composed of spatial attention and channel attention is added to the skip (residual) connection between the penultimate RSU encoder and the first RSU decoder, so that the global overall contour is captured and the overall segmentation accuracy is improved. Experiments on the LGG dataset verify the effectiveness of the algorithm and show improved segmentation accuracy.
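
As an illustration of the preprocessing and augmentation described in (1), the following is a minimal sketch using torchvision; the specific parameter values (rotation angle, shift range, crop size, normalization statistics) are assumptions for illustration rather than the settings used in the thesis, and in practice the same geometric transforms must also be applied to the segmentation masks.

```python
# Illustrative preprocessing/augmentation pipeline for the LGG images.
# Parameter values are assumptions, not the thesis settings; 3-channel
# input is assumed. Normalization reduces the variance between pixels,
# while rotation, shifts, scaling, and cropping enlarge and diversify
# the training set.
import torchvision.transforms as T

train_transform = T.Compose([
    T.RandomRotation(degrees=15),                    # image rotation
    T.RandomAffine(degrees=0,
                   translate=(0.1, 0.1),             # vertical / horizontal shift
                   scale=(0.9, 1.1)),                # scaling
    T.RandomCrop(224),                               # cropping
    T.ToTensor(),
    T.Normalize(mean=[0.5, 0.5, 0.5],                # per-channel normalization
                std=[0.5, 0.5, 0.5]),
])
```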
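The sketch below illustrates the decoder modification described in (2): a ConvNeXt backbone (obtained here through the timm library) supplies deep and shallow feature maps, the shallow branch is refined by a window-attention block standing in for the Swin Transformer module (shifted windows and the full ASPP head are omitted for brevity), and the two branches are fused. Module names and hyper-parameters are assumptions for illustration, not the thesis implementation.

```python
# Simplified DeepLabV3+-style head with a ConvNeXt backbone and window
# attention on the shallow branch. A sketch under stated assumptions,
# not the thesis code.
import torch
import torch.nn as nn
import torch.nn.functional as F
import timm


class WindowAttention(nn.Module):
    """Self-attention inside non-overlapping windows of the shallow feature map."""

    def __init__(self, dim, window=8, heads=4):
        super().__init__()
        self.window = window
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):                # x: (B, C, H, W), H and W divisible by window
        B, C, H, W = x.shape
        w = self.window
        x = x.reshape(B, C, H // w, w, W // w, w)                 # split into windows
        x = x.permute(0, 2, 4, 3, 5, 1).reshape(-1, w * w, C)     # (B*nWin, w*w, C)
        x = x + self.attn(x, x, x, need_weights=False)[0]         # attention + residual
        x = x.reshape(B, H // w, W // w, w, w, C)
        return x.permute(0, 5, 1, 3, 2, 4).reshape(B, C, H, W)


class ConvNeXtDeepLabHead(nn.Module):
    def __init__(self, num_classes=1):
        super().__init__()
        # ConvNeXt backbone replacing MobileNetV2/Xception; stages give strides 4..32.
        self.backbone = timm.create_model("convnext_tiny", features_only=True,
                                          pretrained=False)
        chs = self.backbone.feature_info.channels()               # e.g. [96, 192, 384, 768]
        self.deep_proj = nn.Conv2d(chs[-1], 256, 1)               # stand-in for the ASPP output
        self.shallow_attn = WindowAttention(chs[0], window=8)
        self.shallow_proj = nn.Conv2d(chs[0], 48, 1)
        self.classifier = nn.Sequential(
            nn.Conv2d(256 + 48, 256, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(256, num_classes, 1))

    def forward(self, x):
        feats = self.backbone(x)
        shallow, deep = feats[0], feats[-1]
        deep = F.interpolate(self.deep_proj(deep), size=shallow.shape[2:],
                             mode="bilinear", align_corners=False)
        shallow = self.shallow_proj(self.shallow_attn(shallow))   # refined shallow branch
        out = self.classifier(torch.cat([deep, shallow], dim=1))  # deep/shallow fusion
        return F.interpolate(out, size=x.shape[2:], mode="bilinear", align_corners=False)
```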
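For the attention module in (3), the sketch below shows one common way to combine channel attention and spatial attention (a CBAM-style design assumed here for illustration; the exact module in the thesis may differ), applied to the encoder feature map on the skip connection between the penultimate RSU encoder and the first RSU decoder. The MobileViT bottom block is not shown.

```python
# Channel + spatial attention on a U²-Net skip connection. A CBAM-style
# sketch for illustration only.
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels))

    def forward(self, x):                                # x: (B, C, H, W)
        avg = self.mlp(x.mean(dim=(2, 3)))               # global average pooling
        mx = self.mlp(x.amax(dim=(2, 3)))                # global max pooling
        w = torch.sigmoid(avg + mx).unsqueeze(-1).unsqueeze(-1)
        return x * w                                     # re-weight channels


class SpatialAttention(nn.Module):
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)                # channel-wise average
        mx = x.amax(dim=1, keepdim=True)                 # channel-wise max
        w = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * w                                     # re-weight spatial positions


class SkipAttention(nn.Module):
    """Applied to the encoder feature map before concatenation in the decoder."""

    def __init__(self, channels):
        super().__init__()
        self.ca = ChannelAttention(channels)
        self.sa = SpatialAttention()

    def forward(self, x):
        return self.sa(self.ca(x))
```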