
Research On Hypergraph Based Multi-modality Feature Selection And Classification

Posted on: 2020-11-28    Degree: Master    Type: Thesis
Country: China    Candidate: Y Peng    Full Text: PDF
GTID: 2404330590972666    Subject: Computer Science and Technology
Abstract/Summary:
In recent years, with the development of medical imaging technology, brain image analysis based on machine learning has become a new focus of research and has been widely used to discover disease-related biomarkers and assist in the diagnosis of brain diseases. Multimodal medical images provide a wealth of information for such diagnosis. This thesis focuses on modality fusion that fully exploits both modality-specific and modality-shared information, which benefits feature selection and classification. The hypergraph model is a generalization of the graph in its topological structure: it overcomes the limitations of ordinary graphs in representing complex relationships and can well depict high-order relationships. This thesis studies the application of hypergraphs to multimodal image analysis. The main work and contributions are as follows.

First, a hypergraph-based multi-task feature selection method is proposed, which uses multimodal brain images to select the most discriminative features for the final classification. Specifically, we treat feature selection on each modality as a single task and adopt a multi-task learning framework to perform joint feature selection, making full use of the information shared among modalities. A group-sparsity regularizer ensures that the same brain regions are selected across the different modalities, and a hypergraph Laplacian regularizer is further incorporated to model the high-order relationships among subjects. Finally, a multi-kernel support vector machine fuses the features selected from the different modalities for the final classification. Experimental results on the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset demonstrate that the proposed method selects more discriminative features and thus improves classification performance.

Second, we note that the intrinsic structure of the data is not completely revealed by linear relationships. To address this, we propose a graph-diffusion-based transductive learning method that captures nonlinear relationships. The graph diffusion efficiently integrates the subject similarities derived from the different modalities through a cross-modality diffusion process that converges to a unified graph. To fully exploit the rich information in the resulting graph, a transductive hypergraph learning approach is designed for the final classification, which effectively captures the complex structures and high-order relationships hidden in the data. Experimental results demonstrate that this method integrates the multimodal information well and achieves better classification performance.
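A minimal sketch of the kind of objective the first method describes, under illustrative assumptions: X^(m) is the feature matrix of modality m, y the shared label vector, w^(m) the modality-specific weight vector, L_H^(m) a hypergraph Laplacian built over the subjects of modality m, and lambda, mu trade-off parameters. The exact formulation in the thesis may differ:

    \min_{W=[w^{(1)},\dots,w^{(M)}]} \;
        \sum_{m=1}^{M} \bigl\| y - X^{(m)} w^{(m)} \bigr\|_2^2
        \; + \; \lambda \, \| W \|_{2,1}
        \; + \; \mu \sum_{m=1}^{M} \bigl( X^{(m)} w^{(m)} \bigr)^{\top} L_H^{(m)} \bigl( X^{(m)} w^{(m)} \bigr)

Here \|W\|_{2,1} = \sum_j \|W_{j,:}\|_2 couples the j-th feature (brain region) across modalities, so whole rows of W are driven to zero and the same regions are selected in every modality, while the hypergraph Laplacian term keeps projected subjects that share hyperedges close to each other.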
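As a rough illustration of the cross-modality diffusion idea in the second method, the sketch below fuses two modality-specific subject-similarity matrices by repeatedly propagating each modality's global graph through the other's sparse k-nearest-neighbour graph until they converge to a unified graph. The function names, the choice of k, and the fixed iteration count are assumptions for illustration, not the thesis' exact algorithm:

    import numpy as np

    def row_normalize(S):
        """Normalize each row to sum to 1, giving a transition matrix."""
        return S / (S.sum(axis=1, keepdims=True) + 1e-12)

    def knn_kernel(S, k):
        """Keep only each subject's k strongest similarities (local graph)."""
        P = np.zeros_like(S)
        idx = np.argsort(-S, axis=1)[:, :k]
        rows = np.arange(S.shape[0])[:, None]
        P[rows, idx] = S[rows, idx]
        return row_normalize(P)

    def cross_modality_diffusion(S1, S2, k=10, n_iter=20):
        """Cross-diffusion of two modality-specific similarity graphs.

        S1, S2 : (n, n) symmetric subject-similarity matrices, one per modality.
        Returns a single fused (n, n) similarity graph over the subjects.
        """
        P1, P2 = row_normalize(S1), row_normalize(S2)   # full (global) graphs
        K1, K2 = knn_kernel(S1, k), knn_kernel(S2, k)   # sparse (local) graphs
        for _ in range(n_iter):
            # each modality's global graph is diffused through the other's local graph
            P1_next = K1 @ P2 @ K1.T
            P2_next = K2 @ P1 @ K2.T
            P1, P2 = P1_next, P2_next
        fused = (P1 + P2) / 2
        return (fused + fused.T) / 2   # symmetrize the unified graph

The fused graph obtained this way could then serve as the affinity from which hyperedges are built for the transductive hypergraph classifier described above.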
Keywords/Search Tags:hypergraph learning, multimodal fusion, feature selection, classification, brain image