Nonconvex Regularization For Bi-level Sparse Optimization

Posted on: 2020-03-07  Degree: Master  Type: Thesis
Country: China  Candidate: S F Liu  Full Text: PDF
GTID: 2370330599954489  Subject: Mathematics
Abstract/Summary:
Sparse optimization is a popular topic in optimization. It has been successfully applied in compressed sensing, image processing, machine learning, bioinformatics, and other fields. It aims to recover high-dimensional sparse signals from small samples; for this reason, scholars proposed the ℓ0 minimization model. However, it has been found that some practical problems are not only sparse but also possess special structure, such as simultaneous group sparsity and intra-group sparsity. Friedman et al. took the combination of ℓ1 and ℓ2,1 as the penalty function and thus obtained the sparse-group sparse model, which has been applied successfully to deep neural networks, feature selection, and other problems. Inspired by the sparse-group sparse model and by the fact that ℓ0 characterizes sparsity better than ℓ1, this paper proposes a nonconvex bi-level sparse optimization model by combining ℓ0 with ℓ2,0, and develops its stability theory and algorithmic convergence theory. The main work and contributions of this paper are as follows:

Firstly, this paper proposes bi-level sparse optimization models for the underdetermined linear system and for the nonlinear system, respectively. The bi-level sparse optimization model introduces the ℓ0 and ℓ2,0 penalty terms and captures both group sparsity and intra-group sparsity, providing a new modeling idea for a wide range of engineering applications (a formulation sketch is given after this abstract).

Secondly, the stability theory of the two bi-level sparse optimization models is studied. For the linear system, a group sparse eigenvalue condition on the linear operator A is proposed, and the oracle property and recovery bound of the model are obtained under this condition. For the nonlinear system, a group restricted strong convexity condition on the loss function is presented, and the oracle property and recovery bound are established under this condition. It is worth noting that the oracle property of these two models does not require any regularity assumptions, and the conditions required for the recovery bound are weaker than those in the current literature. This stability theory lays a theoretical foundation for the successful application and promotion of bi-level sparse optimization.

Thirdly, this paper designs corresponding proximal gradient algorithms with a continuation technique for the two models, and establishes their convergence theory (a minimal algorithmic sketch also follows this abstract). Specifically, applying the framework of Kurdyka-Łojasiewicz (KL) theory, the algorithms are proved to be globally convergent; by exploiting the special structure of the penalty function, the convergence point is shown to be a local minimizer of the bi-level sparse optimization model; and the linear convergence rate of the algorithm is also analyzed. The convergence theory provides an algorithmic foundation for extensive engineering computation and practical application.

Fourthly, a large number of numerical experiments were carried out, comparing the algorithm designed in this paper with state-of-the-art algorithms in machine learning, such as ADMM, ISTA, OMP, FoBa, GSparO, and SPGL1. Experimental results show that, by rationally exploiting the bi-level sparsity structure, the model and algorithm designed in this paper clearly outperform existing algorithms.
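As a reading aid, the following display sketches one plausible form of the linear-system model described above. The penalized (rather than constrained) formulation, the group partition G_1, ..., G_J of {1, ..., n}, and the weights lambda_1, lambda_2 are our assumptions for illustration, not necessarily the thesis's exact statement:

\min_{x \in \mathbb{R}^n} \; \tfrac{1}{2}\|Ax - b\|_2^2
  + \lambda_1 \|x\|_0
  + \lambda_2 \|x\|_{2,0},
\qquad
\|x\|_{2,0} := \#\bigl\{\, g : \|x_{G_g}\|_2 \neq 0 \,\bigr\},

where ||x||_0 counts nonzero entries (intra-group sparsity) and ||x||_{2,0} counts groups with a nonzero ℓ2 norm (group sparsity), mirroring the sparse-group model of Friedman et al. with ℓ1 and ℓ2,1 replaced by their ℓ0 counterparts.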
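The proximal gradient algorithm with continuation described in the third contribution can be sketched as follows, under the penalized model above. The helper names (prox_bilevel, prox_gradient_continuation), the geometric continuation schedule, and the step size 1/L are our illustrative choices, not the thesis's exact algorithm. A useful fact the sketch relies on: the prox of t(lambda_1||x||_0 + lambda_2||x||_{2,0}) separates over groups; within a group, a coordinate is kept iff v_i^2/2 > t*lambda_1, and the whole group is zeroed unless keeping it (paying t*lambda_2 once plus the per-coordinate costs) is cheaper.

import numpy as np

def prox_bilevel(v, groups, t, lam1, lam2):
    """Exact prox of t*(lam1*||x||_0 + lam2*||x||_{2,0}), group by group.
    (Hypothetical helper for illustration; groups is a list of index arrays.)"""
    x = np.zeros_like(v)
    for g in groups:
        vg = v[g]
        keep = vg**2 / 2.0 > t * lam1      # coordinate-wise hard threshold
        # cost of keeping the group: pay lam2 once, then per coordinate the
        # cheaper of keeping it (lam1) or zeroing it (v_i^2 / 2)
        cost_keep = t * lam2 + np.sum(np.minimum(vg**2 / 2.0, t * lam1))
        cost_zero = np.sum(vg**2) / 2.0    # cost of zeroing the whole group
        if cost_keep < cost_zero:
            x[g] = np.where(keep, vg, 0.0)
    return x

def prox_gradient_continuation(A, b, groups, lam1, lam2,
                               n_stages=5, iters=200):
    """Proximal gradient with a continuation (warm-start) schedule that
    decreases the regularization weights geometrically toward their targets;
    step size 1/L with L = ||A||_2^2 (Lipschitz constant of the gradient)."""
    m, n = A.shape
    L = np.linalg.norm(A, 2) ** 2
    x = np.zeros(n)
    for lam1_k, lam2_k in zip(np.geomspace(10 * lam1, lam1, n_stages),
                              np.geomspace(10 * lam2, lam2, n_stages)):
        for _ in range(iters):
            grad = A.T @ (A @ x - b)
            x = prox_bilevel(x - grad / L, groups, 1.0 / L, lam1_k, lam2_k)
    return x

Continuation starts from heavier regularization (very sparse iterates) and relaxes toward the target weights, which in practice stabilizes nonconvex prox-gradient iterations; the KL-based global convergence and local-minimizer guarantees stated in the abstract are proved in the thesis itself, not by this sketch.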
Keywords/Search Tags: Linear inverse problem, Nonlinear system, Bi-level sparse optimization, Stability theory, Proximal gradient algorithm