
Multi-modal MRI Image Segmentation Of Brain Tumors Based On Transformer Model And Its Application

Posted on: 2024-06-30 | Degree: Master | Type: Thesis
Country: China | Candidate: Q Liu | Full Text: PDF
GTID: 2544307160478144 | Subject: Engineering
Abstract/Summary:
Owing to factors such as the unique location of brain tumors and the difficulty of their treatment, the incidence and mortality rates of brain tumors have been rising year by year. In current clinical practice, the most effective strategy for brain tumor patients is early detection, early diagnosis, and early treatment: the earlier a lesion is detected, the better physicians can make clinical evaluations and subsequent treatment recommendations. Early diagnosis of brain tumors is therefore of great significance for clinical evaluation and for planning the entire course of treatment. Traditional clinical diagnostic methods mainly rely on computed tomography (CT) to detect high-contrast regions and magnetic resonance imaging (MRI) to detect low-contrast tumors such as pituitary tumors. However, because brain tumors vary greatly in morphology and size and the tissue structures of their sub-regions are highly heterogeneous, clinical diagnosis and treatment suffer from a heavy manual segmentation workload, poor repeatability, lengthy analysis time, and susceptibility to human error.

In recent years, rapid advances in computer-aided diagnosis and deep learning have enabled brain tumor MRI image segmentation techniques to provide clinicians with faster, more accurate, and more reliable support for tumor diagnosis and treatment planning. However, existing methods tend to ignore the differences and connections among the feature information of the multiple MRI modalities, and they usually model global context along a single dimension, leaving considerable room for improvement in segmentation accuracy and precision. To address these issues, this thesis proposes the Multi-view Coupled Cross-Modal Attention Network (MCCANet), a Transformer-based model that extracts and fuses feature information across the four modalities of brain tumor MRI data and globally models tumor features along both the spatial and depth dimensions. MCCANet introduces cross-modal attention mechanisms among the four MRI modalities and couples a Transformer model into the segmentation network. It can extract the spatial relationships among the modalities and fully exploit them to model tumor feature information from both the depth and spatial dimensions, thereby producing more accurate segmentation results and better assisting physicians in the clinical diagnosis and treatment of brain tumor patients.

Finally, the MCCANet segmentation model is trained on the BraTS2019 and BraTS2020 datasets and compared experimentally with three other brain tumor MRI image segmentation models. A statistical significance analysis of the experimental results on the two datasets confirms the accuracy and reliability of MCCANet's evaluation metrics on the three nested sub-regions: Whole Tumor (WT), Tumor Core (TC), and Enhancing Tumor (ET). Ablation experiments on the multi-modal fusion module and the MCCA Transformer network layer, together with a hyperparameter sensitivity analysis of MCCANet, further verify the correctness and effectiveness of the proposed methods and models. Extensive experiments on BraTS2019 and BraTS2020 show that MCCANet achieves more accurate and precise segmentation than the compared 3D methods: the Dice scores for the WT, TC, and ET regions increase by 16.69%, 13.26%, and 12.38%, respectively, while the Hausdorff distances are reduced by an average of 4.9825 mm, 3.3165 mm, and 17.391 mm, respectively.
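To make the cross-modal fusion idea more concrete, the following is a minimal, hypothetical PyTorch sketch of cross-modal attention among four MRI modality feature streams (T1, T1ce, T2, FLAIR). It is not the MCCANet implementation described above; the layer sizes, the pairing of each modality with the other three, and the final averaging step are illustrative assumptions only.

# Minimal, illustrative sketch of cross-modal attention between MRI modalities.
# NOT the thesis' MCCANet implementation; dimensions, modality pairing, and
# fusion strategy are assumptions made for demonstration purposes.
import torch
import torch.nn as nn


class CrossModalAttention(nn.Module):
    """Attend from one modality's tokens (queries) to another's (keys/values)."""

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, query_tokens: torch.Tensor, context_tokens: torch.Tensor) -> torch.Tensor:
        # Residual connection keeps the query modality's own features.
        attended, _ = self.attn(query_tokens, context_tokens, context_tokens)
        return self.norm(query_tokens + attended)


class FourModalityFusion(nn.Module):
    """Fuse T1, T1ce, T2, and FLAIR token sequences: each modality queries the
    other three via cross-attention, then the enriched streams are averaged
    (a simplifying assumption, not the fusion scheme used in the thesis)."""

    def __init__(self, dim: int = 64, num_heads: int = 4):
        super().__init__()
        self.blocks = nn.ModuleList(CrossModalAttention(dim, num_heads) for _ in range(4))

    def forward(self, modalities: list) -> torch.Tensor:
        fused = []
        for i, block in enumerate(self.blocks):
            # Concatenate the other three modalities along the token axis.
            others = torch.cat([m for j, m in enumerate(modalities) if j != i], dim=1)
            fused.append(block(modalities[i], others))
        return torch.stack(fused, dim=0).mean(dim=0)


if __name__ == "__main__":
    # Four modality token sequences: batch of 2, 128 tokens, 64 channels each.
    tokens = [torch.randn(2, 128, 64) for _ in range(4)]
    fused = FourModalityFusion(dim=64)(tokens)
    print(fused.shape)  # torch.Size([2, 128, 64])

In this sketch the fused token sequence would feed a segmentation decoder; how MCCANet actually couples the Transformer layers with the spatial and depth dimensions is described only at a high level in the abstract, so those details are not reproduced here.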
Keywords/Search Tags: Transformer Model, Cross-Modal Attention, Multi-Modal Fusion, Brain Tumor Segmentation, MRI