
Research On Explainable Aided Diagnosis And Causal Effect Estimation

Posted on: 2023-06-05
Degree: Master
Type: Thesis
Country: China
Candidate: Z Y Guo
Full Text: PDF
GTID: 2544306845998959
Subject: Signal and Information Processing
Abstract/Summary:
As the country attaches increasing importance to medical resources, using modern artificial intelligence technology to advance medical care and promote a balanced distribution of medical resources has become one of the country's important initiatives. However, limited by the black-box nature and unexplainability of existing AI models, current aided diagnostic models struggle to gain the full trust of healthcare workers. In addition, selection bias and the lack of counterfactuals in the available patient data inevitably lead causal inference models to produce biased estimates when evaluating the effects of different treatment options, which can result in the wrong treatment being recommended. To overcome these problems, there is an urgent need to construct an explainable, personalized aided diagnosis model capable of accurate treatment effect estimation, so as to meet the demand for aided diagnosis models in smart medicine. The main research results of this thesis are as follows.

(1) To address the problem of explainable diagnosis in medical environments, this thesis proposes a GBDT-based, case-level explainable aided diagnosis model. The decision model is constructed efficiently with an adaptive gradient boosting decision tree (Ada GBDT), on top of which a feature importance embedding algorithm based on two-way mutual information backtracking is proposed to obtain case-level feature diagnostic importance. Finally, case-based reasoning (CBR) combined with Ada GBDT decision making is used in the embedding space to localize difficult samples. Experiments on two publicly available medical datasets demonstrate the effectiveness of the algorithm.

(2) A causal effect estimation model based on Transformer representation learning (CETransformer) is proposed to address the selection bias present in observational data in existing medical settings. To fully exploit the feature correlations between corresponding samples of different treatment groups, the self-attention mechanism of the Transformer architecture is used to obtain representations with greater expressive power. In addition, to alleviate the distribution shift caused by selection bias, generative adversarial networks are used to balance the distributions of the treatment and control groups in the representation space. Experimental results on three real-world datasets demonstrate the advantages of the proposed CETransformer over current state-of-the-art causal effect estimation methods.

(3) To address the inaccurate estimation of treatment effects caused by the limited information in a single subspace, a separable subspace representation learning model for causal effect estimation is proposed. The proposed separable subspace consists of a shared subspace and a treatment-independent subspace, which acquires shared information while effectively retaining treatment-dependent, case-specific information, thus facilitating the learning of more informative representations. In addition, to ensure that the information in the shared subspace and the independent subspace remains unrelated, the Hilbert-Schmidt Independence Criterion (HSIC) is introduced in this thesis in place of a mutual information index. Finally, the structural information of the original space is preserved in the representation space by local similarity preservation learning. Experimental results show that the model achieves excellent treatment effect estimation.
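The HSIC penalty used in contribution (3) to keep the shared and treatment-independent subspaces unrelated can be sketched as the standard biased HSIC estimator with Gaussian kernels. This is a minimal illustration, not the thesis's implementation: the function name, kernel bandwidth, and the choice of Gaussian kernels are assumptions for the sake of the example.

```python
import numpy as np

def hsic(X, Y, sigma=1.0):
    """Biased HSIC estimator: HSIC = tr(K H L H) / (n-1)^2.

    X, Y are (n, d) arrays of paired representations (e.g. the two
    subspace embeddings of the same n cases). A value near zero
    suggests the two representations carry independent information.
    """
    def gram(Z):
        # Gaussian (RBF) kernel matrix for the rows of Z
        sq = np.sum(Z ** 2, axis=1)
        dist2 = sq[:, None] + sq[None, :] - 2.0 * Z @ Z.T
        return np.exp(-dist2 / (2.0 * sigma ** 2))

    n = X.shape[0]
    K, L = gram(X), gram(Y)
    H = np.eye(n) - np.ones((n, n)) / n  # centering matrix
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2
```

In training, such a term would be minimized jointly with the outcome loss so that the shared subspace cannot simply duplicate the treatment-independent one; with characteristic kernels, HSIC is non-negative and zero only under independence, which is why it can substitute for a mutual information index.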
Keywords/Search Tags: Explainable aided diagnosis, Causal effect estimation, Transformer, Subspace representation learning, Deep learning