Research On Medical Image Segmentation Method Based On Multimodal Fusion

Posted on: 2022-08-17
Degree: Master
Type: Thesis
Country: China
Candidate: Y Xu
Full Text: PDF
GTID: 2480306569481794
Subject: Software engineering

Abstract/Summary:
With the continuous development of image processing technology, the field of medical image segmentation has gradually matured. In recent years, the rise of convolutional neural networks has produced a large number of deep learning-based medical image segmentation methods, which achieve better segmentation results than traditional techniques. As a result, computer-assisted delineation of organs and target areas is no longer out of reach for doctors. However, the scarcity of medical data with annotation information limits the training of organ segmentation network models. To address insufficient data, researchers have focused on data augmentation, adjusting network architectures, and designing network modules, and have seldom exploited multi-modal data of the same anatomical structure. The reason is that most existing multi-modal fusion segmentation methods require paired data. Although such methods perform well for segmenting tumors and other focal areas, acquiring multi-modal paired data has practical limitations and requires the cooperation of patients, so paired data is also scarce; unpaired data, by contrast, is relatively easy to obtain, yet these methods cannot exploit it. In addition, because some organs have similar signal intensities, their edges are difficult to segment and pixel misclassification is common. One existing remedy is a two-stage network that first localizes and then segments; however, such two-stage networks consume substantial hardware resources and are ill-suited to multi-modal fusion networks.

Based on the analysis of the above problems, this thesis proposes a 3D multi-modal feature fusion segmentation network (MMFNet) and a two-class and multi-class fusion module (TMFB), and works mainly along the following two lines.

First, in order to make full and efficient use of multi-modal information and thereby compensate for insufficient single-modal data, a 3D multi-modal feature fusion segmentation network for unpaired multi-modal data (MMFNet) is proposed, and the specific fusion stage is analyzed and verified experimentally. In addition, a modal attention mechanism is proposed to strengthen each modality's unique characteristics, so that when images of different modalities are fused, each can learn the information unique to the other.

Second, in order to improve edge segmentation and reduce local misclassification, a two-class and multi-class fusion module (TMFB) is proposed. By guiding the multi-class segmentation task with a simple two-class task, TMFB improves the model's segmentation quality; the module is applied to both the single-modal segmentation network and the multi-modal fusion segmentation network.

The experimental results show that, on the MMWHS dataset, the proposed MMFNet performs well against the baseline model: the average Dice coefficient improves by 5% on CT and by 3% on MRI. Compared with the model before adding TMFB, MMFNet with the TMFB module gains a further 2% in average Dice coefficient on CT; MRI shows no significant change, but the segmentation edges of both modalities are visibly improved.
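To make the modal attention idea more concrete, the following is a minimal sketch of one way a per-modality channel attention gate could be wired before fusion. It is written in PyTorch; the module name, the squeeze-and-excitation-style structure, the reduction ratio, and the gated-sum fusion at the end are all illustrative assumptions, not the thesis's actual MMFNet design.

```python
import torch
import torch.nn as nn

class ModalAttention(nn.Module):
    """Hypothetical per-modality channel gate (squeeze-and-excitation style).

    Re-weights a modality's feature channels so that modality-specific
    cues are emphasized before the features of two modalities are fused.
    """

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool3d(1)  # global average over D, H, W
        self.excite = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c = x.shape[:2]
        weights = self.excite(self.pool(x).view(b, c)).view(b, c, 1, 1, 1)
        return x * weights  # channel-wise re-weighting of the input features


# Usage sketch: separate gates per modality, then a simple additive fusion.
ct_feat = torch.randn(1, 32, 16, 32, 32)   # (B, C, D, H, W) CT encoder features
mr_feat = torch.randn(1, 32, 16, 32, 32)   # MRI encoder features
ct_gate, mr_gate = ModalAttention(32), ModalAttention(32)
fused = ct_gate(ct_feat) + mr_gate(mr_feat)
```

Because the gate operates on features rather than raw images, the two modalities need not be spatially paired, which matches the unpaired-data setting the abstract describes.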
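A second hedged sketch illustrates the two-class-guided fusion idea in the spirit of TMFB: a binary foreground branch gates the multi-class class probabilities, so voxels the two-class task rejects are suppressed in the final prediction. The head layout and the gating-by-multiplication choice are hypothetical; the Dice function simply shows the metric the experiments report.

```python
import torch
import torch.nn as nn

class BinaryGuidedHead(nn.Module):
    """Hypothetical binary-guided multi-class segmentation head."""

    def __init__(self, in_channels: int, num_classes: int):
        super().__init__()
        self.binary_head = nn.Conv3d(in_channels, 1, kernel_size=1)
        self.multi_head = nn.Conv3d(in_channels, num_classes, kernel_size=1)

    def forward(self, features: torch.Tensor):
        fg_prob = torch.sigmoid(self.binary_head(features))          # (B, 1, D, H, W)
        class_prob = torch.softmax(self.multi_head(features), dim=1)  # (B, C, D, H, W)
        # Suppress class probabilities wherever the binary branch sees background.
        fused_prob = class_prob * fg_prob
        return fg_prob, fused_prob


def dice_coefficient(pred: torch.Tensor, target: torch.Tensor,
                     eps: float = 1e-6) -> torch.Tensor:
    """Soft Dice over a pair of binary masks: 2|A∩B| / (|A| + |B|)."""
    intersection = (pred * target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)
```

In training, one would presumably supervise both branches, e.g. a binary foreground loss on `fg_prob` plus a multi-class loss on `fused_prob`, so the easier two-class task can sharpen organ boundaries for the harder multi-class task.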
Keywords/Search Tags:Medical organ segmentation, Deep learning, Multi-modal fusion, Binary classification and multi-class fusion