
Robust Sparse Non-negative Matrix Factorization Based On Logarithmic Norm Constraint

Posted on: 2023-09-14    Degree: Master    Type: Thesis
Country: China    Candidate: Y Q Zhang    Full Text: PDF
GTID: 2568306833965379    Subject: Computer technology
Abstract/Summary:
Nonnegative matrix factorization (NMF) is widely used for a variety of problems in machine learning. The algorithm decomposes nonnegative data into purely additive combinations of local features, which is consistent with the way humans perceive information. The nonnegativity constraints of NMF render the factor matrices sparse, which better reveals a parts-based representation. NMF has been a hot research topic in recent years, and many variants of the model have been developed since it first appeared. However, existing NMF methods still suffer from several common issues: l1-norm-based constraints cannot adequately restrict the sparsity of the factor matrices, and the l2,1-norm-based column-wise sparsity constraint cannot adequately restrict the column-wise sparsity of noise, which degrades performance on noisy data. To address these problems, the main work of this thesis is as follows:

(1) A sparse NMF model based on a log-norm constraint is proposed. The log-norm is used to constrain the basis matrix and the coefficient matrix simultaneously, which yields sparser solutions and a better parts-based representation. A graph Laplacian constraint is also added to the algorithm to handle linearly inseparable data and preserve the local similarity of the data.

(2) A robust nonnegative matrix factorization algorithm based on an l2,log-norm constraint is proposed. This algorithm extends the model in (1) and aims to mitigate the adverse impact of noise in the data on performance. The new method introduces a component matrix to account for the noise, which is assumed to be column-wise sparse under the l2,log-norm constraint. The shrinkage operator of the l2,log-norm is derived as the solution to the associated thresholding problem, and it can also be applied to other problems that enforce column-wise or row-wise sparsity.

(3) Efficient multiplicative updating rules are designed to solve the proposed models; they are guaranteed to produce nonnegative solutions with convergent behavior.

(4) Extensive experiments are conducted on 10 benchmark data sets to show the effectiveness of the proposed methods. Moreover, noisy data sets are generated with three types of noise to evaluate the robustness of the proposed methods to noise. The experimental results show that the proposed methods are effective for clustering and data representation from a practical point of view. Convergence experiments verify the convergence of the algorithms in terms of both the number of iterations and the running time, confirming their feasibility. The empirical results also show that the proposed models converge efficiently and that the obtained factor matrices preserve significant sparse structures.
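As background for the multiplicative updating rules mentioned in (3), the following is a minimal sketch of the classic multiplicative updates for Frobenius-norm NMF, not the thesis's exact algorithm: the log-norm sparsity terms, the graph Laplacian regularizer, and the noise component matrix would all add extra factors to these updates that are not reproduced here. The function name nmf_multiplicative and its parameters are illustrative assumptions.

import numpy as np

def nmf_multiplicative(X, rank, n_iter=200, eps=1e-10, seed=0):
    # Factorize a nonnegative matrix X (m x n) as X ~= W @ H with W, H >= 0.
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, rank)) + eps
    H = rng.random((rank, n)) + eps
    for _ in range(n_iter):
        # Multiplicative updates keep every entry nonnegative by construction.
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Usage: cluster labels can be read off the largest entry in each column of H.
X = np.abs(np.random.default_rng(1).standard_normal((50, 30)))
W, H = nmf_multiplicative(X, rank=5)
labels = H.argmax(axis=0)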
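For the column-wise sparse noise term in (2), the abstract does not give the closed form of the l2,log shrinkage operator. As an analogous illustration only, the sketch below applies the standard column-wise shrinkage (the proximal operator of the l2,1-norm), which either shrinks or completely zeroes each column of the noise matrix; the function columnwise_shrink and the parameter lam are hypothetical names.

import numpy as np

def columnwise_shrink(A, lam):
    # Solve min_E lam * ||E||_{2,1} + 0.5 * ||E - A||_F^2 column by column.
    norms = np.linalg.norm(A, axis=0)                    # l2 norm of each column
    scale = np.maximum(norms - lam, 0.0) / np.maximum(norms, 1e-12)
    return A * scale                                     # small-norm columns become exactly zero

# Usage: any column of A whose l2 norm is below lam is removed entirely.
A = np.random.default_rng(2).standard_normal((10, 6))
E = columnwise_shrink(A, lam=2.0)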
Keywords/Search Tags: Data Processing, Nonnegative Matrix Factorization, Clustering, Feature Extraction, Sparse Representation