
Lagrangian Methods For Sparsity Constrained Optimization

Posted on: 2021-07-21
Degree: Doctor
Type: Dissertation
Country: China
Candidate: C Zhao
Full Text: PDF
GTID: 1480306560986349
Subject: Operational Research and Cybernetics
Abstract/Summary:
Sparsity constrained optimization is a class of optimization problems involving a sparsity constraint. It has wide applications in many fields, including signal and image processing, machine learning, economics and statistics. Over the past ten years, sparse optimization has attracted much attention and become a hot research topic, with an accumulation of fruitful research achievements. Due to the combinatorial property of the l0 norm, sparsity constrained optimization is nonconvex, discontinuous and NP-hard. Generally speaking, continuous optimization theory and algorithms are not directly applicable to such a problem. However, its special structure provides an interesting and challenging research topic. In this thesis, we study Lagrangian theory and algorithms for sparsity constrained optimization, including Lagrangian dual theory, optimality conditions and the Lagrange-Newton algorithm.

For sparse linear programming, by rewriting the sparsity constraint in a disjunctive form, we present an explicit formula for the Lagrangian dual problem of sparse linear programming, namely an unconstrained piecewise-linear convex programming problem, which admits strong duality under bi-dual sparsity consistency. Furthermore, we establish a saddle point theorem based on the strong duality and analyze two classes of stationary points for the saddle point problem.

For sparse nonlinear programming, we establish a first-order optimality condition based on the concept of strong β-Lagrangian stationarity via the Lagrangian function, and reformulate it as a system of nonlinear equations called the Lagrangian equations. The nonsingularity of the corresponding Jacobian and other mathematical properties are then discussed.

Furthermore, based on this theoretical analysis, a Lagrange-Newton algorithm (LNA) is proposed. Under suitable conditions, we establish the local quadratic convergence rate and an iteration complexity estimate for LNA. To further demonstrate the efficiency and superiority of the proposed algorithm, we apply LNA to three specific application problems arising from compressed sensing, sparse portfolio selection and sparse principal component analysis, in which significant benefits accrue from the restricted Newton step in LNA.
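For reference, the problem class discussed above can be written in the standard form below, where f is the (smooth) objective, s is a given sparsity level, and \|x\|_0 counts the nonzero entries of x. The sparse linear programming instance shown second is one common form, stated here only for illustration; the exact constraints treated in the thesis may differ.

\min_{x \in \mathbb{R}^n} \ f(x) \quad \text{s.t.} \quad \|x\|_0 \le s,

\min_{x \in \mathbb{R}^n} \ c^{\top}x \quad \text{s.t.} \quad Ax = b,\ x \ge 0,\ \|x\|_0 \le s.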
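To give a concrete sense of what a Newton step restricted to an estimated support does in the compressed sensing setting, the following is a minimal, self-contained sketch for sparsity constrained least squares. It is an illustrative hard-thresholding-plus-Newton-refinement scheme, not the LNA developed in the thesis (which is derived from the Lagrangian equations); all function names and parameters are ad hoc.

import numpy as np

def restricted_newton_cs(A, b, s, iters=50):
    """Sketch: alternate support estimation with a Newton step on that support
    for min 0.5*||A x - b||^2 s.t. ||x||_0 <= s (illustration only)."""
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(iters):
        grad = A.T @ (A @ x - b)                 # gradient of the least-squares objective
        T = np.argsort(np.abs(x - grad))[-s:]    # keep the s largest entries of a gradient step
        # Newton step restricted to the support T: for a quadratic objective this
        # reduces to solving the normal equations on the selected columns of A.
        AT = A[:, T]
        xT = np.linalg.lstsq(AT, b, rcond=None)[0]
        x_new = np.zeros(n)
        x_new[T] = xT
        if np.linalg.norm(x_new - x) <= 1e-10 * max(1.0, np.linalg.norm(x)):
            break
        x = x_new
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    m, n, s = 80, 200, 5
    A = rng.standard_normal((m, n)) / np.sqrt(m)
    x_true = np.zeros(n)
    x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
    b = A @ x_true
    x_hat = restricted_newton_cs(A, b, s)
    print("recovery error:", np.linalg.norm(x_hat - x_true))

Because the objective is quadratic, the restricted Newton step is exact on the chosen support, which is why such support-restricted second-order updates converge rapidly once the correct support is identified; the thesis's LNA exploits a related effect through its Newton iteration on the Lagrangian equations.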
Keywords/Search Tags:Sparsity constrained optimization, Lagrangian function, Duality theory, Stationary point, Newton method