
First Order Algorithms For Saddle Point Problems And Constrained Optimization

Posted on: 2016-08-19    Degree: Doctor    Type: Dissertation
Country: China    Candidate: Y F You    Full Text: PDF
GTID: 1220330461960236    Subject: Computational Mathematics

Abstract/Summary:
In this thesis, we focus on designing first-order algorithms for solving two classes of problems: saddle point problems and structured constrained optimization. These problems arise frequently in many areas, such as image restoration, high-dimensional statistics, dynamical systems, economics, and physics. All methods proposed in the thesis build on several existing first-order algorithms, e.g., the augmented Lagrangian method (ALM), the alternating direction method of multipliers (ADMM), and the proximal point algorithm (PPA).

In Chapter 1, we briefly introduce the existing methods for saddle point problems and structured constrained optimization, and the motivation for studying these problems. In Chapter 2, we introduce some important definitions, i.e., convex function, monotone operator, projection, and variational inequality, and show the relationships among them. The remainder is the main body of the thesis, which is divided into two parts.

The first part, comprising Chapters 3 and 4, discusses saddle point problems and related methods. The primal-dual hybrid gradient algorithm (PDHG) is a popular method for solving saddle point problems, with particular applications to some basic image processing models. In the literature, PDHG's convergence was established only under some restrictive conditions on its step sizes. In Chapter 3, we revisit PDHG's convergence in the context of a saddle-point problem and try to better understand how to choose its step sizes. More specifically, we show by an extremely simple example that PDHG is not necessarily convergent even when the step sizes are fixed as tiny constants. We then show that PDHG with constant step sizes is indeed convergent if one of the functions of the saddle-point problem is strongly convex, a condition that does hold for some variational models in imaging. Under this additional condition, we also establish a worst-case convergence rate, measured by iteration complexity, for PDHG with constant step sizes. Furthermore, in Chapter 4, when the strong convexity condition does not hold but one of the functions of the saddle-point problem is linear, we design a class of efficient methods based on PPA and contraction methods. Numerical results are provided to verify that the new methods are effective for some practical problems.

The second part, consisting of Chapters 5 and 6, concentrates on decomposition methods for structured constrained optimization. Although many structured constrained optimization problems can be transformed into equivalent saddle point problems by the theory of Lagrange multipliers, mechanically applying the methods proposed in the first part may not work well, since doing so ignores the structure of the objective function and may make the subproblems difficult to solve. ALM [7] is a classical method for solving constrained optimization. In this part, we consider how to modify ALM to solve constrained optimization with special structure and try to obtain some theoretical results. In Chapter 5, we consider the linearly constrained convex programming problem with three blocks and first prove the convergence of ADMM with three blocks under new conditions that are much weaker than the assumptions in [46]. Moreover, in order to accelerate ADMM with three blocks, we also propose a relaxed ADMM involving an additional computation of an optimal step size and establish its global convergence under mild conditions.
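For concreteness, a minimal sketch of the type of saddle-point model and PDHG iteration referred to above, in notation of my own choosing (the functions \theta_1, \theta_2, the matrix A, and the step sizes r, s > 0 are assumptions for illustration, not taken verbatim from the thesis):

\min_{x}\max_{y}\ \Phi(x,y) = \theta_1(x) + y^{\top}Ax - \theta_2(y),

x^{k+1} = \arg\min_{x}\Big\{\theta_1(x) + (y^{k})^{\top}Ax + \tfrac{1}{2r}\|x - x^{k}\|^{2}\Big\},
y^{k+1} = \arg\max_{y}\Big\{y^{\top}Ax^{k+1} - \theta_2(y) - \tfrac{1}{2s}\|y - y^{k}\|^{2}\Big\}.

Chapter 3 concerns precisely when a scheme of this kind with constant r and s converges: the counterexample mentioned above shows it can fail in general, while strong convexity of one of \theta_1, \theta_2 restores convergence.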
In Chapter 6, we consider the -regularized optimization problems with orthogonality constraints. Such optimiza-tion problems are difficult to solve as there exist both non-smooth terms in objective function and non-convex constraints. Combining the idea of ALM [2] and the idea of the proximal alternating minimization techniques proposed in [5], we design a method for a class of L1-regularized optimization problems with orthogonality constraints and prove the proposed method has the sub-sequence convergence property.
Keywords/Search Tags: Convex programming, augmented Lagrangian method, alternating direction method of multipliers, proximal point algorithm, l1-regularization, orthogonality constraints