
Deep Learning Based Medical Image Segmentation Method

Posted on: 2024-04-27    Degree: Master    Type: Thesis
Country: China    Candidate: J H Zhang    Full Text: PDF
GTID: 2530307076996329    Subject: Mechanical (Industrial Engineering) (Professional Degree)
Abstract/Summary:
Medical images provide detailed views of internal organs and tissues through various imaging modalities and serve as important references for modern clinical treatment. With the continuous improvement of imaging technologies, medical image processing tasks have become more sophisticated. Among them, medical image segmentation is a key technology for medical image analysis and clinical treatment, providing information support for medical services such as clinical diagnosis, treatment planning, and prognosis evaluation. In recent years, many researchers have explored this field and proposed various medical image segmentation methods, from early algorithms based on thresholds, regions, and edges to deformable models and clustering algorithms based on traditional machine learning. However, traditional algorithms fall short in feature extraction and representation. With the development of deep learning, its application in the medical imaging field has become increasingly widespread, and deep learning-based medical image segmentation is now one of the research hotspots in medical image processing. Compared with traditional methods, deep learning-based segmentation methods have stronger feature recognition and extraction capabilities and can complete segmentation automatically and accurately. This thesis studies deep learning-based medical image segmentation methods to improve segmentation accuracy, conducting in-depth research on model construction, data processing, feature extraction, and result analysis, and proposing several effective models and methods. The main research contents are as follows:

(1) For thyroid nodule segmentation in ultrasound images, where varying target sizes cause an imbalance between foreground and background classes and U-Net skip connections pass irrelevant information to the Decoder, a thyroid nodule ultrasound image segmentation algorithm based on residual asymmetric convolution and an attention mechanism is proposed. Improved Inception-based RI modules and residual convolution modules are designed and combined with two proposed types of asymmetric multi-scale convolution kernels to perform feature extraction, fusion, and decoding in the network's Encoder and Decoder. Attention modules based on SA and CA are introduced to strengthen the model's focus on target regions and boundary information. Experimental results show that the proposed algorithm achieves better segmentation results than current mainstream network models.

(2) For the low segmentation accuracy caused by inter-organ interference when segmenting multiple target regions in abdominal medical images, an improved Transformer-based multi-target medical image segmentation network is designed after summarizing the advantages and limitations of the Transformer structure. The network uses an improved Dense Block structure for feature extraction, fuses feature sequences of different scales and resolutions with Trans Layers to establish long-range dependencies, and finally recovers features through a convolutional structure with skip connections to produce the final segmentation. The model is tested on the Synapse and CHAOS datasets, and ablation experiments are conducted on the model structure and related modules. Experimental results demonstrate that the proposed algorithm effectively handles the segmentation of multiple target regions in a single image and exhibits strong generalization ability and robustness.

(3) For brachial plexus ultrasound image segmentation with a single target region, which suffers from both foreground-background class imbalance and interference from other tissues and organs, ConvTrans-Net, which combines Transformer and CNN, is designed by summarizing the advantages and limitations of the CNN and Transformer structures. A TC module is proposed as the foundational structure for feature extraction in the Encoder, and the Mix-STCA feature fusion method is designed in combination with attention mechanisms. ConvTrans-Net is tested on a publicly available dataset of brachial plexus ultrasound images, with ablation experiments designed for the TC module and Mix-STCA method. Experimental results show that ConvTrans-Net outperforms the comparison methods in terms of performance.
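The asymmetric convolution kernels mentioned in contribution (1) can be illustrated with a minimal sketch: a cascade of a 1×3 and a 3×1 kernel covers the same 3×3 receptive field as a full 3×3 kernel while using 6 parameters instead of 9. The thesis's actual RI modules and kernel shapes are not specified here; the plain NumPy implementation and the box-filter kernels below are illustrative assumptions, not the author's code.

```python
import numpy as np

def conv2d(x, k):
    """Valid-mode 2-D cross-correlation of image x with kernel k."""
    kh, kw = k.shape
    H, W = x.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

# A full 3x3 kernel has 9 parameters; the asymmetric pair 1x3 + 3x1 has 6,
# yet their cascade still covers a 3x3 receptive field.
x = np.arange(25, dtype=float).reshape(5, 5)
k_row = np.ones((1, 3)) / 3.0   # 1x3 horizontal kernel
k_col = np.ones((3, 1)) / 3.0   # 3x1 vertical kernel
y = conv2d(conv2d(x, k_row), k_col)   # cascade -> shape (3, 3)

# For a separable kernel, the cascade equals one 3x3 convolution
# with the outer product of the two asymmetric kernels:
k_full = k_col @ k_row                # 3x3 box filter
y_full = conv2d(x, k_full)
assert np.allclose(y, y_full)
print(y.shape)  # (3, 3)
```

This parameter saving is one common motivation for asymmetric kernels; in practice such branches are typically combined with ordinary square kernels at multiple scales, as the abstract's "asymmetric multi-scale" phrasing suggests.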
Keywords/Search Tags: medical image segmentation, deep learning, CNN, Transformer, multi-scale feature fusion