
Multi-View Classification Model Based On The Similarities Across Views And Samples

Posted on: 2022-12-25    Degree: Master    Type: Thesis
Country: China    Candidate: X Y Li    Full Text: PDF
GTID: 2518306776493554    Subject: Automation Technology
Abstract/Summary:
With the continuous growth of data storage capacity and advances in data collection technology, a large amount of data is generated and collected in the real world. An object can often be described by multiple different sets of features, and this type of data is called multi-view data. In multi-view data, each view describes the object from a different aspect and therefore carries different information about it. For multi-view data, commonly used single-view learning methods usually concatenate all views directly into a single view. However, when the feature dimension of each view is large, such methods are prone to overfitting, and they ignore the inherent characteristics of multi-view data. How to effectively fuse information from multiple views to compensate for the limitations of any single view has long been an important research topic, and methods of this kind are called multi-view learning.

In recent years, a large number of multi-view learning methods have been proposed. Existing works mainly focus on (1) capturing the common and complementary information across views and (2) exploiting the similarity information between view pairs to capture the potential relationships between views. For the latter, however, we find that existing methods for calculating view similarity cannot truly capture the similarity between views. To address these issues, this thesis proposes a view similarity calculation strategy based on distance correlation that effectively captures the real similarity between views. In addition, we propose a novel approach called Multi-viEw LatenT space learning with Similarity preservation (MELTS) for multi-view classification. It aims to learn more effective latent representations by preserving the similarity information between view pairs and the label consistency information between sample pairs. The main contributions of this thesis can be summarized as follows:

· This thesis proposes a view similarity calculation strategy based on distance correlation. When exploring the similarity among different views, the feature dimensions of the views are usually different, which makes calculating the similarity among views challenging. This thesis observes that existing methods for calculating view similarity may fail to measure it accurately when the number of samples or the distance between sample pairs is large, and proposes distance correlation as an alternative that accurately captures the real similarity among views (a minimal computational sketch is given after the abstract).

· This thesis proposes a novel multi-view classification method, MELTS. Currently, some methods based on the principles of consistency and complementarity learn latent representations by increasing the dissimilarity of the complementary information between different view pairs. However, different views may have potential relationships, and encouraging the complementary information between view pairs to be as different as possible may ignore the true relationship between them. By means of a view-pair similarity-preserving term and a label-pair consistency-preserving term, MELTS simultaneously captures (1) the similarity information between different view pairs and (2) the label consistency information between different sample pairs, which reveals the relationships among views and improves the discriminative ability of the learned representations (an illustrative sketch of such an objective also follows the abstract).

· This thesis conducts experiments on synthetic datasets and widely used real-world datasets. Comprehensive experimental results demonstrate that distance correlation can effectively capture inter-view similarity. In addition, for the MELTS method, we conduct a comprehensive analysis of classification performance, the validity of the core mechanisms, parameter sensitivity, convergence, the validity of the learned representations, and running efficiency. The experimental results demonstrate the effectiveness of MELTS.
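
For reference, the following is a minimal sketch of how the empirical distance correlation between two views can be computed. It is not the thesis's own implementation; the function name and the use of NumPy/SciPy are assumptions, but the computation follows the standard definition of distance correlation, which applies even when the two views have different feature dimensions.

    import numpy as np
    from scipy.spatial.distance import pdist, squareform

    def distance_correlation(X, Y):
        """Empirical distance correlation between two views.

        X: (n, p) array, Y: (n, q) array; the rows must describe the same
        n samples, but the feature dimensions p and q may differ.
        """
        X, Y = np.atleast_2d(X), np.atleast_2d(Y)
        assert X.shape[0] == Y.shape[0], "views must describe the same samples"

        # Pairwise Euclidean distance matrices within each view.
        a = squareform(pdist(X))
        b = squareform(pdist(Y))

        # Double-centering: subtract row and column means, add the grand mean.
        A = a - a.mean(axis=0, keepdims=True) - a.mean(axis=1, keepdims=True) + a.mean()
        B = b - b.mean(axis=0, keepdims=True) - b.mean(axis=1, keepdims=True) + b.mean()

        # Squared distance covariance and distance variances.
        dcov2 = (A * B).mean()
        dvar_x = (A * A).mean()
        dvar_y = (B * B).mean()

        denom = np.sqrt(dvar_x * dvar_y)
        return np.sqrt(dcov2 / denom) if denom > 0 else 0.0

As a usage example, two views generated from the same underlying samples (e.g. one view being a noisy, lower-dimensional function of the other) yield a distance correlation close to 1, while two independent random views yield a value close to 0; the measure is therefore insensitive to the mismatch in feature dimensions.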
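The abstract does not spell out the MELTS objective, so the sketch below is only illustrative: it assumes per-view latent representations and shows one plausible way to combine a view-pair similarity-preservation term with a sample-pair label-consistency term. The function name, the weights alpha and beta, and the concrete form of each term are hypothetical and not taken from the thesis.

    import numpy as np

    def illustrative_melts_loss(Z, labels, view_sim, alpha=1.0, beta=1.0):
        """Illustrative only: one plausible combination of a view-pair
        similarity-preservation term and a sample-pair label-consistency term.

        Z        : list of V arrays, each of shape (n, d); hypothetical per-view
                   latent representations of the same n samples.
        labels   : (n,) integer class labels.
        view_sim : (V, V) target view-similarity matrix, e.g. distance
                   correlations computed between the original views.
        """
        V = len(Z)

        # (1) View-pair similarity preservation: the similarity between latent
        #     spaces should match the target similarity between the views.
        sim_term = 0.0
        for u in range(V):
            for v in range(u + 1, V):
                latent_sim = np.trace(Z[u].T @ Z[v]) / (
                    np.linalg.norm(Z[u]) * np.linalg.norm(Z[v]) + 1e-12)
                sim_term += (latent_sim - view_sim[u, v]) ** 2

        # (2) Label consistency between sample pairs: pull the latent
        #     representations of same-class sample pairs close together.
        same = (labels[:, None] == labels[None, :]).astype(float)
        np.fill_diagonal(same, 0.0)
        cons_term = 0.0
        for v in range(V):
            D = np.square(Z[v][:, None, :] - Z[v][None, :, :]).sum(axis=-1)
            cons_term += (same * D).sum() / max(same.sum(), 1.0)

        return alpha * sim_term + beta * cons_term

In a full method, a loss of this kind would be minimized jointly with a reconstruction or classification objective over the learned representations; the exact formulation in the thesis may differ.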
Keywords/Search Tags:Multi-view classification, Latent representation learning, View similarity