
The Study Of Models And Algorithms Of Non-Convex Piecewise Quadratic Approximation For Sparse Optimization

Posted on: 2019-08-26
Degree: Doctor
Type: Dissertation
Country: China
Candidate: Q Li
GTID: 1360330548485779
Subject: Operational Research and Cybernetics
Abstract/Summary:
In recent years, a large number of sparse optimization problems have emerged in areas such as signal and image processing, compressed sensing, statistics, machine learning, and data mining, in which the optimized variables have some kind of sparse structure. Exploiting sparse structure not only makes it possible to reconstruct high-dimensional signals from a small amount of data but, more importantly, also greatly speeds up the solution of large-scale optimization problems. The main approach to solving sparse optimization problems is relaxation, and non-convex relaxation can generally obtain higher-quality sparse solutions than convex relaxation. With the rapid development of non-convex optimization algorithms, how to use non-convex relaxation to solve sparse optimization problems quickly and accurately has become a hot research direction. In the age of big data, driven by the increasing need to solve large-scale problems, first-order algorithms, represented by the accelerated gradient algorithm, have attracted much interest recently. Naturally, the accelerated gradient algorithm has been extended to non-convex optimization problems, and its convergence has been proved. Although many numerical experiments demonstrate the superiority of the accelerated gradient algorithm, there are no theoretical results showing that, for non-convex optimization, the accelerated gradient algorithm is better than the non-accelerated gradient algorithm. Therefore, how to break the theoretical bottleneck of the accelerated gradient algorithm in solving non-convex optimization is a challenging problem. This thesis aims to propose new non-convex relaxations that provide better sparse solutions for sparse optimization problems. By exploiting the special structure of the model to design the algorithm and its accelerated variant, we aim to obtain a theoretical breakthrough for accelerated algorithms in solving non-convex optimization problems. The main results and innovations of the thesis are described as follows.

By using the least absolute residual approximation, we propose a new piecewise quadratic function to approximate the L0 norm. We then develop a piecewise quadratic approximation (PQA) model whose objective function is the sum of a smooth non-convex component and a non-smooth convex component. To solve the PQA model, we present an iterative gradient algorithm based on the idea of the iterative thresholding algorithm and derive its convergence and convergence rate. Finally, we carry out a series of numerical experiments to demonstrate the performance of the proposed algorithm for the PQA model. We also conduct a phase diagram analysis to further show the superiority of PQA over the L1 and L1/2 regularizations.

To solve the PQA model, we also improve Ghadimi and Lan's accelerated gradient (AG) algorithm, which is designed for non-convex and non-smooth optimization. While sharing the same iteration complexity bound, the proposed new AG algorithm requires less computational cost than the original AG algorithm. We provide a series of numerical experiments to demonstrate the performance of the improved AG algorithm and of the iterative gradient algorithm for PQA. The numerical results show that the improved AG algorithm is superior to the iterative gradient algorithm when solving large-scale problems.
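To illustrate the kind of composite first-order scheme discussed above, the following is a minimal sketch for minimizing a sum of a smooth term and a non-smooth convex term handled through its proximal map. The function names, the soft-thresholding stand-in, the FISTA-style extrapolation, and the step-size choice are illustrative assumptions; this is not the thesis's iterative gradient algorithm, nor the Ghadimi-Lan AG algorithm or its improved version.

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal map of tau*||.||_1 -- an illustrative stand-in for the
    proximal map of the non-smooth convex component."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def accelerated_proximal_gradient(grad_f, prox_g, x0, step, n_iter=500, accelerate=True):
    """Forward-backward iteration for min_x f(x) + g(x), with f smooth
    (possibly non-convex) and g non-smooth convex.  With accelerate=True a
    FISTA-style extrapolation step is added; this only sketches the basic
    acceleration idea."""
    x_prev = x0.copy()
    y = x0.copy()
    t = 1.0
    for _ in range(n_iter):
        x = prox_g(y - step * grad_f(y), step)                    # gradient step + proximal step
        t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))         # momentum parameter update
        y = x + ((t - 1.0) / t_next) * (x - x_prev) if accelerate else x
        x_prev, t = x, t_next
    return x_prev

# Example: sparse least squares  min_x 0.5*||Ax - b||^2 + lam*||x||_1
rng = np.random.default_rng(0)
A = rng.standard_normal((80, 200))
x_true = np.zeros(200); x_true[:5] = rng.standard_normal(5)
b = A @ x_true
lam = 0.1
x_hat = accelerated_proximal_gradient(
    grad_f=lambda x: A.T @ (A @ x - b),
    prox_g=lambda v, s: soft_threshold(v, lam * s),
    x0=np.zeros(200),
    step=1.0 / np.linalg.norm(A, 2) ** 2,   # 1/L with L the spectral norm squared
)
```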
The piecewise quadratic approximation regularization is built to solve the linear inverse problem in sparse optimization. By developing a thresholding representation theory for PQA regularization, we propose an iterative PQA thresholding algorithm (PQA algorithm) for solving the PQA regularization. The PQA algorithm converges to a local minimizer of the regularization problem with an eventually linear convergence rate. Furthermore, we extend the idea of the accelerated gradient algorithm to develop the accelerated iterative PQA thresholding algorithm (APQA algorithm) and establish its improved linear convergence rate. We carry out a series of numerical experiments to demonstrate the performance of the proposed algorithms for PQA regularization. The APQA algorithm proves significantly better than the PQA algorithm, both theoretically and in practice.
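The generic shape of such an iterative thresholding scheme for a regularized linear inverse problem is sketched below. The thresholding map used here is the classical soft threshold, included only as a placeholder: the actual operator for the PQA penalty is the one derived from the thesis's thresholding representation theory, and the function names are hypothetical.

```python
import numpy as np

def iterative_thresholding(A, b, lam, threshold, n_iter=1000):
    """Iterative thresholding for  min_x 0.5*||Ax - b||^2 + lam*P(x):
    a gradient step on the data-fit term followed by a componentwise
    thresholding step.  `threshold(v, tau)` is a placeholder for the
    thresholding operator of the penalty P; the PQA-specific operator
    would be plugged in here."""
    mu = 1.0 / np.linalg.norm(A, 2) ** 2       # step size from the spectral norm of A
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        v = x - mu * (A.T @ (A @ x - b))       # gradient step on the data-fit term
        x = threshold(v, lam * mu)             # componentwise thresholding step
    return x

# Placeholder thresholding map: the soft threshold (operator of the L1 penalty).
soft = lambda v, tau: np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)
```

An APQA-style accelerated variant would additionally insert a momentum/extrapolation step between consecutive iterates, in the spirit of the earlier sketch.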
Keywords/Search Tags: Sparse optimization, non-convex approximation, accelerated gradient algorithm, iterative thresholding algorithm, phase diagram analysis