Medical image segmentation is an important step in medical diagnosis, and automatic segmentation of lesions can save physicians a great deal of time, so medical image segmentation is an active research topic. Deep learning methods that have emerged in recent years offer good accuracy, a simple pipeline, and little manual intervention, and they are therefore also applied to medical image segmentation. U-Net is a deep-learning-based medical image segmentation model that has been widely used and studied. Aiming at the low segmentation accuracy of the U-Net model and its unsuitability for 3D images with large slice spacing, this paper makes the following contributions:

(1) To address the low accuracy of U-Net, this paper combines a squeeze-and-excitation (SE) structure with a pyramid pooling module, which enlarges the receptive field of each pixel and weights the feature channels so that useless features are suppressed. The Dice similarity coefficient (DSC) is used as the target metric in the experiments. On a nasopharyngeal carcinoma 3D medical image dataset, combining the pyramid pooling module with U-Net improves the DSC from 66.17% to 68.32%, and adding the SE structure further improves it to 75.71%; the model also outperforms V-Net and DeepLab v3+. On the PDDCA 3D medical image dataset, the model is used to segment 9 organs and reaches an average DSC of 78.04%, 4.6% higher than U-Net and 0.6% higher than the previous state-of-the-art model on this dataset.

(2) To address the large slice spacing of 3D medical image data, this paper builds on the above model and proposes a segmentation network based on the convolutional gated recurrent unit (CGRU). The above model achieves higher accuracy, but its 3D convolutions are not well suited to 3D images with large slice spacing. To keep its advantages while handling the large spacing along the slice dimension, this paper replaces the 3D convolutions in the above model with 2D convolutions, uses the modified model to extract 2D features, and then uses a CGRU to extract features along the large-spacing dimension. Combining 2D convolution with the CGRU allows 3D features to be extracted effectively from 3D images with large slice spacing. On the nasopharyngeal carcinoma 3D medical image dataset, the DSC of this model is 77.46%, 1.7% higher than the model in the first contribution. On the PDDCA 3D medical image dataset, its average DSC over the 9 organs reaches 81.62%, 3.6% higher than the model in the first contribution and 4.2% higher than the previous state-of-the-art model on this dataset, which verifies the effectiveness of the model.
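For reference, the DSC reported above is the standard overlap measure between a predicted mask P and a ground-truth mask G:

\mathrm{DSC}(P, G) = \frac{2\,\lvert P \cap G \rvert}{\lvert P \rvert + \lvert G \rvert}

A DSC of 1 corresponds to perfect overlap and 0 to no overlap.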
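As an illustration of the two components named in contribution (1), the following is a minimal PyTorch-style sketch of a standard pyramid pooling module and a standard squeeze-and-excitation block applied to 3D feature maps. The bin sizes, reduction ratio, and channel counts are assumptions for illustration, not the exact configuration used in this paper.

import torch
import torch.nn as nn
import torch.nn.functional as F

class PyramidPooling3D(nn.Module):
    """Pool the feature map at several grid sizes, project, upsample, and
    concatenate with the input, enlarging the effective receptive field.
    Bin sizes (1, 2, 4) are an assumption."""
    def __init__(self, channels, bins=(1, 2, 4)):
        super().__init__()
        self.stages = nn.ModuleList([
            nn.Sequential(
                nn.AdaptiveAvgPool3d(b),
                nn.Conv3d(channels, channels // len(bins), kernel_size=1, bias=False),
                nn.ReLU(inplace=True),
            )
            for b in bins
        ])

    def forward(self, x):                      # x: (N, C, D, H, W)
        size = x.shape[2:]
        feats = [x] + [
            F.interpolate(stage(x), size=size, mode="trilinear", align_corners=False)
            for stage in self.stages
        ]
        return torch.cat(feats, dim=1)         # input channels plus pooled-context channels

class SEBlock3D(nn.Module):
    """Squeeze-and-excitation over the channel axis of a 3D feature map:
    global average pooling gives one value per channel, a small bottleneck
    produces per-channel weights in (0, 1), and weak channels are suppressed.
    The reduction ratio of 8 is an assumption."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool3d(1)    # squeeze: global context per channel
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                      # excitation: per-channel weights
        )

    def forward(self, x):                      # x: (N, C, D, H, W)
        n, c = x.shape[:2]
        w = self.fc(self.pool(x).view(n, c)).view(n, c, 1, 1, 1)
        return x * w                           # reweight channels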
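Similarly, for contribution (2), the following is a minimal sketch of the idea of extracting 2D features slice by slice and then recurring along the large-spacing slice axis with a convolutional GRU. The cell design, kernel size, and the stand-in 2D feature extractor are assumptions for illustration, not the exact network proposed in this paper.

import torch
import torch.nn as nn

class ConvGRUCell(nn.Module):
    """GRU gates computed with 2D convolutions, so spatial structure is
    preserved while recurring along the slice axis. Kernel size 3 is an
    assumption."""
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        p = k // 2
        self.gates = nn.Conv2d(in_ch + hid_ch, 2 * hid_ch, k, padding=p)  # update and reset gates
        self.cand = nn.Conv2d(in_ch + hid_ch, hid_ch, k, padding=p)       # candidate state
        self.hid_ch = hid_ch

    def forward(self, x, h):
        z, r = torch.sigmoid(self.gates(torch.cat([x, h], dim=1))).chunk(2, dim=1)
        h_tilde = torch.tanh(self.cand(torch.cat([x, r * h], dim=1)))
        return (1 - z) * h + z * h_tilde

class SliceCGRU(nn.Module):
    """Extract 2D features per slice, then run a ConvGRU along the slice
    dimension, capturing inter-slice context without 3D convolution.
    The single-layer 2D extractor is a hypothetical stand-in."""
    def __init__(self, in_ch=1, feat_ch=16, hid_ch=16):
        super().__init__()
        self.feat2d = nn.Sequential(
            nn.Conv2d(in_ch, feat_ch, 3, padding=1), nn.ReLU(inplace=True)
        )
        self.cell = ConvGRUCell(feat_ch, hid_ch)

    def forward(self, x):                      # x: (N, C, D, H, W), D = large-spacing slice axis
        n, _, d, hgt, wid = x.shape
        h = x.new_zeros(n, self.cell.hid_ch, hgt, wid)
        outs = []
        for t in range(d):                     # recur over slices
            f = self.feat2d(x[:, :, t])        # per-slice 2D features
            h = self.cell(f, h)
            outs.append(h)
        return torch.stack(outs, dim=2)        # (N, hid_ch, D, H, W)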