
Study on Deep-Learning Methods for Cancer Survival Prediction Based on Multimodal Data

Posted on: 2024-08-02
Degree: Master
Type: Thesis
Country: China
Candidate: X. Q. Wu
GTID: 2544306932455994
Subject: Biomedical engineering

Abstract
As a malignant disease involving multiple factors and stages, cancer has a serious impact on social and economic development. Accurate cancer survival prediction can guide clinicians toward appropriate treatment and thereby improve patients' quality of life. Existing studies have shown that integrating multimodal data, such as gene expression, copy number alteration, and pathology images, provides a more comprehensive and multifaceted view of a patient's disease progression and thus improves the accuracy of survival prediction. However, existing methods rarely account for the heterogeneity between modalities and fail to effectively model the complex relationships within multimodal data. As a result, they cannot fully exploit the rich information shared across modalities or learn expressive multimodal representations, which limits survival-prediction performance. To address these problems, this dissertation proposes deep-learning methods for multimodal cancer survival prediction that eliminate the heterogeneity of multimodal data and effectively mine the latent relationships between modalities, significantly improving accuracy on three cancer datasets: low-grade glioma, breast cancer, and lung squamous cell carcinoma. The main work of this dissertation is as follows:

(1) To effectively eliminate the heterogeneity of multimodal data, this dissertation combines existing deep-learning techniques and proposes CAMR (cross-aligned multimodal representation learning for cancer survival prediction), a multimodal survival-prediction method based on disentangled representation learning. First, a cross-modal representation-alignment network based on adversarial training learns modality-invariant representations, aligning the distributions of the different modalities and thereby eliminating the heterogeneity between them. A cross-modal fusion module then models the internal relationships among the modality-invariant representations and fuses them into a unified cross-modal invariant representation. In parallel, the method learns modality-specific representations, yielding comprehensive and disentangled multimodal representations. Evaluated with several performance metrics, CAMR effectively resolves the problems caused by modality heterogeneity and outperforms existing methods.
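The abstract does not give implementation details, but the adversarial alignment step can be pictured as a modality discriminator trained against per-modality encoders. The PyTorch sketch below is a hypothetical illustration only; the encoder names, layer sizes, and input dimensions are assumptions, not the architecture used in the thesis.

# Minimal sketch of adversarial modality alignment. All names, layer
# sizes, and input dimensions are illustrative assumptions.
import torch
import torch.nn as nn

class ModalityEncoder(nn.Module):
    """Maps one modality (e.g. gene expression) into a shared space."""
    def __init__(self, in_dim, hid_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hid_dim), nn.ReLU(),
            nn.Linear(hid_dim, hid_dim))

    def forward(self, x):
        return self.net(x)

class ModalityDiscriminator(nn.Module):
    """Predicts which modality an embedding came from."""
    def __init__(self, hid_dim=256, n_modalities=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(hid_dim, 64), nn.ReLU(),
            nn.Linear(64, n_modalities))

    def forward(self, z):
        return self.net(z)

# Hypothetical input dimensions for the three modalities.
enc_gene = ModalityEncoder(in_dim=2000)  # gene expression
enc_cna  = ModalityEncoder(in_dim=2000)  # copy number alteration
enc_img  = ModalityEncoder(in_dim=512)   # pathology-image features
disc = ModalityDiscriminator()
ce = nn.CrossEntropyLoss()

x_gene = torch.randn(8, 2000)
x_cna  = torch.randn(8, 2000)
x_img  = torch.randn(8, 512)
z = torch.cat([enc_gene(x_gene), enc_cna(x_cna), enc_img(x_img)])
labels = torch.arange(3).repeat_interleave(8)  # modality ids 0/1/2

# The discriminator learns to tell modalities apart; the encoders are
# updated to fool it, pushing the three embedding distributions toward
# a shared, modality-invariant space (optimizer steps omitted).
d_loss = ce(disc(z.detach()), labels)
g_loss = -ce(disc(z), labels)

In this adversarial setup, the encoders succeed exactly when the discriminator cannot distinguish the source modality of an embedding, which is one common way to realize the distribution alignment the abstract describes.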
(2) To model the latent relationships between modalities, this dissertation further proposes CAMR-CT (combined cross-aligned multimodal representation learning and cross-modal Transformer for cancer survival prediction), a novel method that joins disentangled representation learning with a cross-modal Transformer. First, a disentangled-representation-learning network produces modality-invariant and modality-specific representations, and a cross-modal interaction module refines these representations. A Transformer fusion module then further explores the interactions between modalities, learning more discriminative multimodal representations. Experimental results show that CAMR-CT further improves survival-prediction accuracy on the above three cancer types by effectively modeling the relationships among gene expression, copy number alteration, and pathology-image data.
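For the Transformer fusion stage, one natural reading is to treat each learned representation as a token and let self-attention model cross-modal interactions. The sketch below is again a hypothetical illustration; the dimensions, mean pooling, and scalar risk head are assumptions, not the module described in the thesis.

# Minimal sketch of Transformer-based cross-modal fusion. Dimensions,
# pooling, and the risk head are illustrative assumptions.
import torch
import torch.nn as nn

class CrossModalTransformerFusion(nn.Module):
    def __init__(self, dim=256, n_heads=4, n_layers=2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(
            d_model=dim, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.risk_head = nn.Linear(dim, 1)  # scalar survival-risk score

    def forward(self, tokens):
        # tokens: (batch, n_tokens, dim), one token per learned
        # representation; self-attention lets every token attend to
        # tokens from the other modalities.
        fused = self.encoder(tokens)
        pooled = fused.mean(dim=1)  # simple mean pooling over tokens
        return self.risk_head(pooled)

# Six tokens per patient: one invariant and one specific representation
# for each of gene expression, copy number alteration, and images.
model = CrossModalTransformerFusion()
risk = model(torch.randn(4, 6, 256))  # output shape: (4, 1)

A scalar risk output of this kind would typically be trained with a Cox partial-likelihood or similar survival loss, though the abstract does not state which objective the thesis uses.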
Keywords/Search Tags:Cancer Survival Prediction, Deep Learning, Multimodal Data, Data Heterogeneity, Disentangled Representation Learning, Transformer