
Combined Kernel-Based Nonnegative Matrix Factorization Algorithms With Applications

Posted on: 2023-04-22
Degree: Master
Type: Thesis
Country: China
Candidate: X Chen
Full Text: PDF
GTID: 2568307151479464
Subject: Computer application technology
Abstract/Summary:
In today's data-rich era, sources of information have increased dramatically, and with growing data complexity the variety of data types keeps expanding. Matrix-type data is broadly applicable and offers flexible solutions to practical problems, and the quality of its representation directly determines a model's learning performance. As the internal relationships within matrix data become more complex, the dimensionality (number of attributes) of the matrices also increases, and these relationships cannot be approximated by linear models. It is therefore of great significance to propose reasonable and effective representation algorithms that reveal the nonlinear characteristics of matrix data.

As a branch of representation learning, nonlinear representation algorithms can more readily reveal the complex nonlinear relationship between the original space and the feature space. For example, deep neural networks stack nonlinear mapping functions layer by layer into a deep structure with strong nonlinear representation ability, while kernel learning methods use a kernel function to map the original data into a new feature space and thereby construct nonlinear representation models; kernel nonnegative matrix factorization is a typical model of this kind. Compared with deep learning methods, nonnegative matrix factorization is based on the idea that a whole is made up of parts, which matches many practical problems and gives the method strong interpretability. Recent studies have shown that, because latent features occur at multiple levels and single-kernel functions are limited, most kernel nonnegative matrix factorization algorithms fail to capture key information and have poor noise resistance, which poses challenges for these algorithms.

Building on existing methods for the nonlinear representation learning of data, this thesis studies the robustness of nonnegative matrix factorization algorithms from three perspectives: enhancing local feature constraints, improving the single-kernel function, and combating noisy data. New nonlinear representation algorithms are proposed accordingly, together with corresponding solutions for the nonlinear structure of data and the multi-level nature of features. Theoretical analysis and experimental evaluation provide reliable support for the validity of this study. The main work of this thesis is as follows:

1. To address the loss of local key features in nonnegative matrix factorization, a directed-graph clustering algorithm based on kernel nonnegative matrix factorization is proposed. The kernel function used in the algorithm has two parts for nonlinear feature representation: on one hand, the global structural information (macro-level features) of the directed graph is represented by a fractional-power inner-product kernel; on the other hand, a normalized linear kernel is used in the regularization term to strengthen the local (micro-level) feature constraints. Experiments show that the proposed algorithm achieves better clustering quality on multiple directed-graph datasets than DeepWalk and nonnegative matrix factorization algorithms without kernel functions and local constraints.

2. Considering the limitations of single-kernel functions, a new combined kernel function is constructed and a combined-kernel nonnegative matrix factorization algorithm is proposed; the convergence of the algorithm is rigorously proved. The combined kernel is designed for the multi-level nature of features in images: global features (face color, contour, etc.) are represented by the fractional-power inner product, and local features (facial expressions, eyes, nose, etc.) are represented by the normalized linear kernel. Experiments on six face datasets show that the multi-view feature extraction of the combined kernel helps the algorithm improve feature completeness and sparsity, properties that traditional linear algorithms and single-kernel functions lack.

3. From the perspective of algorithm robustness, the influence of Gaussian noise of different intensities on the algorithms is studied, and the Frobenius norm is replaced in the objective function to reduce the objective's sensitivity to noise. Although the kernel function preserves feature sparsity under low-intensity noise, the sparsity of the extracted features drops sharply when the noise intensity is high. Experiments with noise show that the proposed algorithm resists noise better than existing algorithms and can withstand Gaussian noise of different intensities.
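The abstract names the two kernel components but gives no formulas, so the following is a minimal sketch under assumptions: the fractional-power inner-product kernel is taken as (x_i . x_j)^d with 0 < d < 1, the normalized linear kernel as the cosine-style x_i . x_j / (||x_i|| ||x_j||), the combination as a weighted sum, and the factorization as a generic symmetric kernel NMF K ≈ HH^T with multiplicative updates. None of these choices is confirmed by the thesis; function names and the weight `lam` are illustrative.

```python
import numpy as np

def fractional_power_kernel(X, d=0.5):
    # Assumed global-feature kernel: (x_i . x_j)^d, with nonnegative data so
    # the fractional power stays real.
    G = X @ X.T
    return np.power(np.maximum(G, 0.0), d)

def normalized_linear_kernel(X, eps=1e-12):
    # Assumed local-feature kernel: inner products normalized by vector norms
    # (cosine similarity), emphasizing direction over magnitude.
    G = X @ X.T
    norms = np.sqrt(np.diag(G)) + eps
    return G / np.outer(norms, norms)

def combined_kernel(X, d=0.5, lam=0.5):
    # Hypothetical combination: weighted sum of the two kernel matrices.
    return fractional_power_kernel(X, d) + lam * normalized_linear_kernel(X)

def symmetric_kernel_nmf(K, r, iters=200, seed=0):
    # Generic symmetric NMF K ~= H H^T via multiplicative updates; one common
    # variant, not necessarily the thesis's exact algorithm.
    rng = np.random.default_rng(seed)
    H = rng.random((K.shape[0], r)) + 1e-3
    for _ in range(iters):
        H *= (K @ H) / (H @ (H.T @ H) + 1e-12)
    return H
```

For clustering, each sample would then be assigned to the latent dimension with the largest entry in its row of H (`H.argmax(axis=1)`), which is how symmetric NMF factors are commonly read off as cluster memberships.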
Keywords/Search Tags:Image recognition, Directed graph clustering, Nonnegative matrix factorization, Combined-kernel function, Sparse feature