
Robust Principal Component Analysis And Its Applications

Posted on: 2020-02-03
Degree: Master
Type: Thesis
Country: China
Candidate: C Zhang
Full Text: PDF
GTID: 2370330602952272
Subject: Applied Mathematics
Abstract/Summary:
In the era of digital information, large-scale, high-dimensional data has emerged, such as ultra-HD images, video sequences, and biological data. Owing to small sample sizes, missing entries, and noise pollution, such high-dimensional data are typically sparse, noisy, and redundant. How to separate useful information from such imperfect data is a hot topic in pattern recognition, machine learning, and data mining, and it is also the focus of this thesis. Subspace learning models compress high-dimensional data into a low-dimensional subspace, thereby extracting the structural information of the original data. Traditional subspace learning models, such as principal component analysis (PCA), do not cope well with high-dimensional data. Driven by the theories of sparse representation and compressed sensing, subspace learning models based on sparse and low-rank constraints have emerged. These constraints fit the characteristics of high-dimensional data, and satisfactory results have been obtained in practical applications.

This thesis focuses on robust principal component analysis (RPCA), a fundamental model based on sparse and low-rank constraints. First, related subspace learning models and algorithms, including PCA and RPCA, are systematically reviewed. Then, the RPCA model is improved to enhance its efficiency and applicability. The main contributions are summarized as follows.

First, two fast algorithms for the RPCA model are proposed. RPCA must optimize the nuclear norm, which requires a singular value decomposition (SVD). The time complexity of the SVD is very high, which limits the efficiency of the model. Randomized singular value decomposition (RSVD) projects the original matrix into a low-dimensional space, reducing the size of the matrix on which a partial SVD is performed and thus improving the running speed. The proposed algorithms, combined with RSVD, improve the efficiency of the model with almost no loss of accuracy.

Second, a low-rank matrix decomposition model with a column-sparse constraint is proposed. RPCA uses the l1-norm to constrain the sparse matrix and treats each element independently, which is ineffective for describing structured noise. For data corrupted by column noise, the proposed model instead imposes the l2,1-norm as the sparsity constraint. In addition, by exploiting the idea of matrix decomposition and approximating the rank with factor matrices, the SVD operation is avoided, further improving the efficiency of the algorithm. The proposed model combines the l2,1-norm with matrix decomposition, so it can separate column noise from the original matrix while improving the computing speed.

In summary, to improve the efficiency and applicability of RPCA, this thesis proposes two fast algorithms and an improved model. The proposed methods are tested on random matrix decomposition, image denoising, and video background modeling. The experimental results show that the improved model significantly improves efficiency and successfully separates column noise.
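The abstract does not give the thesis's exact algorithms, but the combination it describes, RPCA solved by an augmented-Lagrangian scheme with the singular value thresholding step replaced by a truncated randomized SVD, can be sketched as follows. All names, parameter defaults (`rank_guess`, oversampling, the penalty growth factor), and the inexact-ALM solver choice are illustrative assumptions, not the thesis's implementation.

```python
import numpy as np

def rsvd(A, k, oversample=10, seed=0):
    """Randomized SVD sketch: project A onto an estimated range via a
    Gaussian test matrix, then run an exact SVD on the small factor."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    G = rng.standard_normal((n, min(k + oversample, n)))
    Q, _ = np.linalg.qr(A @ G)                  # orthonormal basis for range(A @ G)
    Ub, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    return (Q @ Ub)[:, :k], s[:k], Vt[:k]

def svt(A, tau, k):
    """Singular value thresholding using the truncated randomized SVD."""
    U, s, Vt = rsvd(A, k)
    s = np.maximum(s - tau, 0.0)
    return (U * s) @ Vt

def rpca(M, lam=None, rank_guess=10, iters=500, tol=1e-7):
    """Inexact-ALM sketch of RPCA: min ||L||_* + lam*||S||_1  s.t.  M = L + S."""
    m, n = M.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    mu = 1.25 / np.linalg.norm(M, 2)            # spectral norm
    mu_max = mu * 1e7
    Y = np.zeros_like(M)
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    for _ in range(iters):
        L = svt(M - S + Y / mu, 1.0 / mu, rank_guess)       # low-rank update
        T = M - L + Y / mu
        S = np.sign(T) * np.maximum(np.abs(T) - lam / mu, 0.0)  # soft threshold
        R = M - L - S                                        # residual
        Y += mu * R
        mu = min(mu * 1.5, mu_max)
        if np.linalg.norm(R) <= tol * np.linalg.norm(M):
            break
    return L, S
```

Here the randomized SVD keeps only `rank_guess` singular triplets, so each iteration avoids a full SVD of the data matrix, which is the efficiency gain the abstract attributes to RSVD.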
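The column-sparse constraint in the second contribution replaces the elementwise l1 soft threshold with the proximal operator of the l2,1-norm, which shrinks or zeroes whole columns at once and therefore matches column-structured noise. A minimal sketch of that operator (the function name and the small numerical floor are assumptions for illustration):

```python
import numpy as np

def prox_l21(X, tau):
    """Proximal operator of tau * ||X||_{2,1}: columns whose l2-norm is
    at most tau are set to zero; the rest are shrunk toward zero."""
    norms = np.linalg.norm(X, axis=0)
    scale = np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)
    return X * scale
```

Inside a solver, this step would replace the elementwise soft threshold of the sparse term; combined with a factorization L = U @ V.T of bounded inner dimension, the nuclear-norm SVD can be avoided entirely, as the abstract describes.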
Keywords/Search Tags:subspace learning model, robust principal component analysis, randomized singular value decomposition, matrix decomposition, video background modeling