
Double Algorithms for L1-Regularized Optimization Problems

Posted on: 2013-10-22 | Degree: Master | Type: Thesis
Country: China | Candidate: H Zhu | Full Text: PDF
GTID: 2230330371489360 | Subject: Operational Research and Cybernetics
Abstract/Summary:
Finding sparse solutions to under-determined linear systems of equations arises frequently in the fields of machine learning, signal processing, compressive sensing, linear inverse problems, and statistical inference. Generally, this task can be accomplished by solving ℓ1-norm regularized minimization problems. However, the resulting problem is challenging due to the non-smoothness of the regularization term. In this thesis, we use conjugate gradient methods and alternating direction methods, respectively, to solve large-scale sparse optimization problems in compressive sensing. Numerical experiments are also reported, which illustrate that the proposed methods are practical and promising.

In the first chapter, we give the definitions of compressive sensing and sparse optimization, and survey recent progress on algorithms for sparse optimization. Moreover, some important notations and symbols used throughout the thesis are introduced.

In Chapter 2, inspired by Nesterov's smoothing technique, we propose a modified Polak-Ribière-Polyak (PRP) conjugate gradient method to solve the ℓ1-norm regularized least squares problem for sparse signal restoration. The per-iteration cost of the proposed algorithm is dominated by three matrix-vector multiplications, and its global convergence is guaranteed by results in the optimization literature. Moreover, the algorithm is accelerated by a continuation technique, as usual. Limited experiments show that this continuation technique benefits its performance. Numerical experiments further illustrate that the proposed algorithm performs better than NESTA, a recently developed code based on Nesterov's smoothing technique.

In Chapter 3, we propose primal and dual versions of an alternating direction algorithm for reconstructing a sparse signal from observation data contaminated by significant noise. The algorithm minimizes a convex non-smooth function consisting of the sum of an ℓ1-norm regularization term and an ℓ1-norm data fidelity term. We minimize the corresponding augmented Lagrangian function alternately, in either the primal or the dual form. Both of the resulting subproblems admit explicit solutions, either by a one-dimensional shrinkage or by an efficient Euclidean projection. The algorithm is easily implementable and requires only two matrix-vector multiplications per iteration. The global convergence of the proposed algorithm is established under some technical conditions. Extensions to the non-negative signal recovery problem and the weighted regularization minimization problem are also discussed and tested. Numerical results illustrate that the proposed algorithm performs better than the state-of-the-art solver YALL1.

Finally, we list some concluding remarks and further research topics in the last chapter.
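For concreteness, the two models treated in the thesis can be stated as follows. This is a sketch of the standard formulations only; the notation A, b, λ, ν and the exact weighting of the terms are assumptions, since the abstract does not fix them.

```latex
\begin{align*}
\text{(Chapter 2, $\ell_1$-regularized least squares)} \quad
  & \min_{x \in \mathbb{R}^n} \; \lambda \|x\|_1 + \tfrac{1}{2}\|Ax - b\|_2^2, \\
\text{(Chapter 3, $\ell_1$-$\ell_1$ model)} \quad
  & \min_{x \in \mathbb{R}^n} \; \|x\|_1 + \tfrac{1}{\nu}\|Ax - b\|_1,
\end{align*}
```

where A ∈ R^{m×n} with m ≪ n is the sensing matrix, b the observation, and λ, ν > 0 regularization parameters. The ℓ1 data-fidelity term in the second model is what makes it robust to the significant noise mentioned above.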
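A minimal Python sketch of the Chapter 2 idea, smoothing the ℓ1 term à la Nesterov and then running a PRP-type conjugate gradient iteration, is given below. The Huber-type smoothing, the PRP+ non-negativity safeguard, and the Armijo backtracking line search are illustrative assumptions; the thesis' modified PRP rule and step-size strategy may differ.

```python
import numpy as np

def huber(x, mu):
    # Huber smoothing of |t|: t^2/(2*mu) if |t| <= mu, |t| - mu/2 otherwise
    a = np.abs(x)
    return np.sum(np.where(a <= mu, x**2 / (2.0 * mu), a - mu / 2.0))

def huber_grad(x, mu):
    # Gradient of the Huber smoothing: a clipped, "smoothed sign" of x
    return np.clip(x / mu, -1.0, 1.0)

def prp_cg(A, b, lam=0.1, mu=1e-3, max_iter=500, tol=1e-6):
    """PRP+ conjugate gradient sketch for min lam*||x||_1 + 0.5*||Ax - b||^2,
    with ||x||_1 replaced by its Huber (Nesterov) smoothing."""
    x = np.zeros(A.shape[1])
    r = A @ x - b                                    # residual
    g = lam * huber_grad(x, mu) + A.T @ r            # gradient of smoothed objective
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        # Backtracking (Armijo) line search along d
        f = lam * huber(x, mu) + 0.5 * (r @ r)
        t = 1.0
        while t > 1e-12:
            x_new = x + t * d
            r_new = A @ x_new - b
            f_new = lam * huber(x_new, mu) + 0.5 * (r_new @ r_new)
            if f_new <= f + 1e-4 * t * (g @ d):
                break
            t *= 0.5
        g_new = lam * huber_grad(x_new, mu) + A.T @ r_new
        # PRP formula with non-negativity safeguard, which restarts automatically
        beta = max((g_new @ (g_new - g)) / (g @ g), 0.0)
        d = -g_new + beta * d
        x, r, g = x_new, r_new, g_new
    return x
```

The continuation technique mentioned in the abstract would wrap this routine, solving a sequence of problems with gradually decreasing mu and warm-starting each solve from the previous solution.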
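The Chapter 3 building blocks, splitting the ℓ1-ℓ1 objective and solving each subproblem explicitly by one-dimensional shrinkage, can be sketched as follows. This is a linearized primal variant under assumed parameter choices (nu, beta, and the step size tau are hypothetical); the dual version and the Euclidean-projection subproblem mentioned in the abstract are omitted. Consistent with the abstract, each iteration costs two matrix-vector multiplications, one with A and one with its transpose.

```python
import numpy as np

def shrink(v, t):
    # One-dimensional shrinkage (soft-thresholding) operator
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def adm_l1l1(A, b, nu=1.0, beta=1.0, max_iter=1000, tol=1e-8):
    """Linearized primal ADM sketch for min ||x||_1 + (1/nu)*||Ax - b||_1,
    via the split  min ||x||_1 + (1/nu)*||y||_1  s.t.  Ax - y = b."""
    m, n = A.shape
    tau = 0.99 / (np.linalg.norm(A, 2) ** 2)   # step size for the linearized x-step
    x, y, u = np.zeros(n), np.zeros(m), np.zeros(m)
    Ax = A @ x
    for _ in range(max_iter):
        # y-step: explicit solution by one-dimensional shrinkage
        y = shrink(Ax - b + u, 1.0 / (nu * beta))
        # x-step: linearize the quadratic coupling, then shrink (mat-vec 1)
        g = A.T @ (Ax - y - b + u)
        x_new = shrink(x - tau * g, tau / beta)
        Ax = A @ x_new                          # mat-vec 2, reused next iteration
        # multiplier update in scaled (u = z/beta) form
        u = u + (Ax - y - b)
        if np.linalg.norm(x_new - x) <= tol * max(1.0, np.linalg.norm(x)):
            x = x_new
            break
        x = x_new
    return x
```

Both updates are separable across coordinates, which is why they admit the explicit shrinkage solutions the abstract refers to.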
Keywords/Search Tags: compressive sensing, sparse optimization, conjugate gradient method, alternating direction method, dual problem