Analysis And Research Of Proximal Gradient Method For A Class Of Nonsmooth Convex Optimization Problems

Posted on: 2020-08-28    Degree: Master    Type: Thesis
Country: China    Candidate: M Xie    Full Text: PDF
GTID: 2370330623456666    Subject: Mathematics
Abstract/Summary:
In this paper, we study the convex optimization problem whose objective function is the sum of a smooth loss function and a non-smooth regularization function. Such problems have been extensively studied and applied in many fields, including contemporary statistics, signal processing and machine learning, with applications such as signal denoising, compressed sensing, sparse linear regression, high-dimensional multinomial classification, variable selection and feature extraction. They can be solved effectively by the proximal gradient method (PGM), whose iteration consists of a gradient step and a proximal step. Under the assumption that the gradient of the differentiable function is Lipschitz continuous, the rate of convergence is typically the sublinear O(1/k), and linear convergence is known only in some special cases. However, the global Lipschitz assumption is restrictive in many circumstances, and much work has been devoted to relaxing it.

In this paper, we consider problems in which the gradient of the loss function is only locally Lipschitz continuous, and we obtain the following results. We show that the step size produced by the line search remains bounded, establish the sufficient descent property of the algorithm and estimates on the objective function value, and finally prove the linear convergence of the PGM based on the line search. We then consider problems regularized by the sparse group Lasso function, prove an error bound around the optimal solution set, and thereby obtain linear convergence. Finally, we solve a specific optimization problem; the preliminary experimental results support our theoretical analysis.

This paper is organized as follows. Chapter 1 introduces the significance of the optimization problem, the progress of the PGM and the main content of this paper. Chapter 2 presents the proximal gradient algorithm and analyzes its convergence and complexity under the assumption that the gradient of the loss function is locally Lipschitz continuous. Chapter 3 considers problems regularized by the sparse group Lasso function and, via two lemmas, proves an error bound around the optimal solution set without assuming global Lipschitz continuity of the gradient of the smooth function. Chapter 4 analyzes the numerical experimental results and summarizes the whole paper.
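To make the iteration concrete, the following Python sketch shows a proximal gradient method with backtracking line search applied to a sparse group Lasso regularized least-squares problem. The least-squares loss, the group structure and all parameter names here are illustrative assumptions rather than the thesis's exact formulation or experimental setup; the sketch only reflects the general scheme described above (a gradient step, a proximal step, and a line search that avoids relying on a global Lipschitz constant).

```python
# A minimal sketch, assuming a least-squares loss and a hand-picked group
# structure, of the proximal gradient method with backtracking line search for
#     min_x  0.5*||A x - b||^2 + lam1*||x||_1 + lam2*sum_g ||x_g||_2.
import numpy as np

def prox_sparse_group_lasso(v, t, groups, lam1, lam2):
    """Prox of t*(lam1*||.||_1 + lam2*sum_g ||.||_2): soft-threshold
    elementwise, then shrink each group as a block."""
    z = np.sign(v) * np.maximum(np.abs(v) - t * lam1, 0.0)   # l1 shrinkage
    for g in groups:                                          # group shrinkage
        norm_g = np.linalg.norm(z[g])
        scale = max(0.0, 1.0 - t * lam2 / norm_g) if norm_g > 0 else 0.0
        z[g] = scale * z[g]
    return z

def pgm_linesearch(A, b, groups, lam1=0.1, lam2=0.1,
                   t0=1.0, beta=0.5, max_iter=500, tol=1e-8):
    """Proximal gradient iteration x+ = prox_{t g}(x - t grad f(x)); the step
    size t is chosen by backtracking until a quadratic upper bound on f holds,
    which only requires the gradient to be Lipschitz near the iterates."""
    f = lambda x: 0.5 * np.linalg.norm(A @ x - b) ** 2
    grad = lambda x: A.T @ (A @ x - b)
    x = np.zeros(A.shape[1])
    t = t0
    for _ in range(max_iter):
        g_x = grad(x)
        while True:                                # backtracking line search
            x_new = prox_sparse_group_lasso(x - t * g_x, t, groups, lam1, lam2)
            d = x_new - x
            if f(x_new) <= f(x) + g_x @ d + (0.5 / t) * (d @ d):
                break
            t *= beta
        if np.linalg.norm(x_new - x) <= tol * max(1.0, np.linalg.norm(x)):
            return x_new
        x = x_new
    return x

# Example usage on synthetic data with two groups of five coordinates each.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 10))
x_true = np.concatenate([rng.standard_normal(5), np.zeros(5)])
b = A @ x_true + 0.01 * rng.standard_normal(50)
groups = [np.arange(0, 5), np.arange(5, 10)]
x_hat = pgm_linesearch(A, b, groups)
print("recovered support:", np.nonzero(np.abs(x_hat) > 1e-6)[0])
```

The proximal operator above uses the standard decomposition for the sparse group Lasso penalty: an elementwise soft-thresholding step followed by a blockwise shrinkage of each group.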
Keywords/Search Tags: nonsmooth convex optimization, proximal gradient method, locally Lipschitz continuous, error bound, linear convergence