
Error Bound-Based Convergence Analyses of Certain Classes of Algorithms for Structured Optimization Problems in Machine Learning and Signal Processing

Posted on: 2018-02-23
Degree: Ph.D
Type: Dissertation
University: The Chinese University of Hong Kong (Hong Kong)
Candidate: Yue, Man Chung
GTID: 1440390002450923
Subject: Applied Mathematics
Abstract/Summary:
Besides being an interesting subject in convex analysis and variational analysis, the concept of error bound also plays an important role in the study of optimization algorithms. Specifically, it has been used to establish the asymptotic convergence rates of various first- and second-order algorithms. The advantage of error bound-based analysis is that it requires neither strong convexity nor non-degeneracy of the optimal solutions, making it well suited to high-dimensional optimization problems in modern applications, which often lack strong convexity. The primary goal of this study is to extend the class of optimization algorithms that admit an error bound-based analysis and to lay down the theoretical foundations for these algorithms.

We first consider the class of convex composite minimization problems, which has a wide range of applications in machine learning, statistics, compressed sensing, and signal processing. After reviewing the Luo-Tseng framework for proving linear convergence of the proximal gradient method, we turn our attention to the proximal Newton method. As our first contribution, we propose a new family of proximal Newton methods for solving convex composite minimization problems and establish their global convergence and asymptotic convergence rate. Our analysis is novel and is based on the classical Luo-Tseng error bound.

We then study the cubic regularization method for smooth non-convex minimization problems. Nesterov and Polyak proved under some assumptions that the algorithm converges to the set of second-order critical points, and that the local convergence rate is quadratic if a certain non-degeneracy condition is further assumed. We remove the non-degeneracy assumption from the convergence rate result by developing a new convergence rate analysis based on a certain error bound condition, thus generalizing the convergence rate result to a much broader class of problems. A proof-of-concept study on the phase retrieval problem is also carried out.

Finally, we consider the angular synchronization problem, which admits a natural non-convex quadratically constrained quadratic optimization formulation. A popular but computationally expensive approach to this formulation is convex semidefinite relaxation. An alternative low-cost approach is to solve the non-convex problem directly by the generalized power method. Remarkably, we show that, under the same noise power assumption as the semidefinite relaxation approach, the generalized power method converges to a global optimum at a linear rate. At the heart of our analysis is a new error bound inequality for the non-convex problem. Our result also settles a recently posed open question about the convergence rate of the generalized power method.
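As an illustration of two of the iteration schemes discussed above, here are two minimal NumPy sketches. They are illustrative only: the step sizes, shift parameter, initializations, stopping rules, and synthetic problem instances below are assumptions made for this example, not the exact schemes analyzed in the thesis.

The first sketch is the proximal gradient method for convex composite minimization, min_x f(x) + h(x) with f smooth and h admitting a cheap proximal operator, demonstrated on a LASSO instance:

import numpy as np

def proximal_gradient(grad_f, prox_h, x0, step, max_iter=500, tol=1e-8):
    # Proximal gradient method for min_x f(x) + h(x): a gradient step
    # on the smooth part f followed by a proximal step on the
    # non-smooth part h. Fixed step size for simplicity.
    x = x0.copy()
    for _ in range(max_iter):
        x_new = prox_h(x - step * grad_f(x), step)
        if np.linalg.norm(x_new - x) <= tol:
            x = x_new
            break
        x = x_new
    return x

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1 (entrywise soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

# LASSO instance: f(x) = 0.5 * ||Ax - b||^2, h(x) = lam * ||x||_1.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
b = A @ (rng.standard_normal(100) * (rng.random(100) < 0.1))
lam = 0.1
step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = Lipschitz constant of grad f
x_hat = proximal_gradient(
    grad_f=lambda x: A.T @ (A @ x - b),
    prox_h=lambda v, s: soft_threshold(v, lam * s),
    x0=np.zeros(100),
    step=step,
)

The second sketch is the generalized power method for the angular (phase) synchronization problem, max_x x^H C x subject to |x_i| = 1 for all i, which alternates a power-method-style matrix step with an entrywise projection onto the unit circle. The shift alpha, the random initialization, and the noise model are again illustrative choices, not the thesis's exact scheme:

def generalized_power_method(C, x0, alpha, max_iter=200, tol=1e-8):
    n = C.shape[0]
    x = x0 / np.abs(x0)                       # start on the unit torus
    M = C + alpha * np.eye(n)                 # shifted matrix used in the power step
    for _ in range(max_iter):
        y = M @ x                             # power-method-style matrix step
        y = np.where(np.abs(y) > 1e-12, y, x) # guard against (numerically) zero entries
        x_new = y / np.abs(y)                 # entrywise projection onto the unit circle
        done = np.linalg.norm(x_new - x) <= tol * np.sqrt(n)
        x = x_new
        if done:
            break
    return x

# Synthetic instance: C = z z^H + Hermitian noise.
rng2 = np.random.default_rng(1)
n = 50
z = np.exp(1j * rng2.uniform(0.0, 2.0 * np.pi, n))    # ground-truth phases
W = rng2.standard_normal((n, n)) + 1j * rng2.standard_normal((n, n))
C = np.outer(z, z.conj()) + 0.25 * (W + W.conj().T)
x0 = np.exp(1j * rng2.uniform(0.0, 2.0 * np.pi, n))   # random initialization
x_hat_phases = generalized_power_method(C, x0, alpha=float(n))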
Keywords/Search Tags: Convergence, Error bound, Problem, Generalized power method, Algorithms, Optimization, Rate, Class