
Convex optimization under inexact first-order information

Degree: Ph.D. (Thesis)
University: Georgia Institute of Technology
Candidate: Lan, Guanghui
Subject: Engineering
Abstract/Summary:
In the first part of this thesis we focus on general non-smooth convex optimization under a stochastic oracle. We start by introducing the Mirror-descent Stochastic Approximation (SA) algorithm of Nemirovski et al. (2009) for solving this class of problems. By incorporating two important elements into the classic SA algorithm, namely averaging of the iterates and adaptation to the problem's geometry, this modified algorithm can significantly outperform other approaches, such as the Sample Average Approximation (SAA) method, for a certain class of convex programming problems. Some issues related to the mirror-descent SA algorithm, however, remain to be addressed. A long-standing problem for SA methods is the absence of a validation procedure to estimate the accuracy of the generated solutions, whereas an important methodological property of the SAA approach is that, with some additional effort, it can provide such estimates. To this end, we show that while running a mirror-descent SA procedure one can compute, with a small additional effort, lower and upper statistical bounds on the optimal objective value.

In the second part of this thesis we consider Stochastic Composite Optimization (SCO), an important class of convex programming problems whose objective function is given by the sum of a smooth and a non-smooth component. Our contribution consists of the following aspects. First, a novel analysis demonstrates that the simple mirror-descent SA algorithm, applied to these problems, exhibits the best rate of convergence known so far, a rate previously guaranteed only by the more involved stochastic mirror-prox algorithm. Second, by properly modifying a variant of Nesterov's optimal method for smooth convex optimization, we propose an accelerated SA algorithm that achieves the theoretically optimal rate of convergence for this class of problems; the accelerated SA algorithm is thus a universally optimal method for non-smooth, smooth, and stochastic convex optimization. It should be stressed that Nesterov's optimal method and its variants were designed for deterministic smooth convex optimization, and with their very aggressive stepsizes they were believed to be too sophisticated to be applied to non-smooth and stochastic problems. We substantially extend the analysis of Nesterov's optimal method to non-smooth and stochastic convex optimization, and devise a novel (in fact, increasing) stepsize policy for these problems. Third, we investigate the accelerated SA in more detail; for example, we derive exponential bounds on the large deviations of the resulting solution inaccuracy from its expected value, provided the noise from the stochastic oracle is "light-tailed". Finally, the significant advantages of the accelerated SA over existing algorithms are illustrated in the context of solving a class of stochastic programming problems whose feasible region is a simple compact convex set intersected with an affine manifold.
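To make the first part's two ingredients concrete, here is a minimal Euclidean sketch of mirror-descent SA with iterate averaging, where the prox-mapping reduces to a plain projection. The piecewise-linear test problem, the constant stepsize, and all names are illustrative assumptions, not the thesis's construction:

```python
import numpy as np

# Minimal Euclidean sketch of mirror-descent SA with iterate averaging.
# Illustrative test problem: minimize f(x) = E_i[ |<a_i, x> - b_i| ] over the
# unit ball, with a stochastic oracle returning one random subgradient.

rng = np.random.default_rng(0)
n, N = 10, 5000                            # dimension, SA iterations
A = rng.normal(size=(200, n))              # scenario pool standing in for xi
b = A @ (np.ones(n) / np.sqrt(n)) + 0.1 * rng.normal(size=200)

def stochastic_subgradient(x):
    """Oracle: a subgradient of |<a_i, x> - b_i| for one random scenario i."""
    i = rng.integers(len(A))
    return np.sign(A[i] @ x - b[i]) * A[i]

def project_ball(x, r=1.0):
    """Euclidean prox-mapping of the unit ball: plain projection."""
    nrm = np.linalg.norm(x)
    return x if nrm <= r else (r / nrm) * x

D = 2.0                                    # diameter of the feasible set
M = np.linalg.norm(A, axis=1).max()        # bound on subgradient norms
gamma = D / (M * np.sqrt(N))               # robust constant stepsize

x = np.zeros(n)
x_avg = np.zeros(n)
for t in range(1, N + 1):
    x = project_ball(x - gamma * stochastic_subgradient(x))
    x_avg += (x - x_avg) / t               # running average of the iterates

print("objective estimate at averaged solution:", np.abs(A @ x_avg - b).mean())
```

The averaged point x_avg, not the last iterate, is the one carrying the O(1/sqrt(N)) guarantee in the non-smooth setting.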
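The accelerated SA of the second part can be sketched in the same Euclidean setting. The scheme below follows the general shape of a Nesterov-type method with an aggregated iterate and a stepsize that grows linearly before being capped at a noise-dependent level; the test problem, the noise level, and the particular constants are assumptions for illustration, not the thesis's exact policy:

```python
import numpy as np

# Hedged sketch of a Nesterov-type accelerated SA scheme with an increasing
# stepsize policy.  The thesis's actual policy balances the Lipschitz constant
# of the smooth component against the oracle's variance; the cap used here is
# only a stand-in for that balance.

rng = np.random.default_rng(1)
n, N, sigma, radius = 20, 2000, 0.05, 5.0
A = rng.normal(size=(50, n)) / np.sqrt(50)
b = rng.normal(size=50)
L = np.linalg.norm(A.T @ A, 2)             # Lipschitz constant of the gradient

def noisy_gradient(x):
    """Stochastic oracle: gradient of 0.5*||Ax - b||^2 plus Gaussian noise."""
    return A.T @ (A @ x - b) + sigma * rng.normal(size=n)

def project_ball(x, r=radius):
    nrm = np.linalg.norm(x)
    return x if nrm <= r else (r / nrm) * x

x = np.zeros(n)
x_ag = np.zeros(n)
for t in range(1, N + 1):
    alpha = 2.0 / (t + 1)                  # aggregation weight
    # Increasing stepsize, capped once the noise level starts to dominate.
    gamma = min(t / (2.0 * L), radius / (sigma * np.sqrt(N)))
    x_md = (1 - alpha) * x_ag + alpha * x  # "middle" search point
    x = project_ball(x - gamma * noisy_gradient(x_md))
    x_ag = (1 - alpha) * x_ag + alpha * x  # aggregated (output) iterate

print("final objective:", 0.5 * np.linalg.norm(A @ x_ag - b) ** 2)
```

Note that gamma grows with t rather than shrinking, the opposite of the classic SA prescription; the aggregation step is what keeps such aggressive stepsizes stable.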
In the third part of this work we investigate a deterministic optimization technique, namely the augmented Lagrangian (AL) method, applied to a special class of convex programming (CP) problems. We establish a bound on the total number of iterations of Nesterov's optimal method, i.e., the inner iterations, performed throughout the entire inexact AL method to obtain a near primal-dual optimal solution. We also present variants with better iteration-complexity bounds than the original inexact AL method; they consist of applying the original approach to a perturbed problem obtained by adding a strongly convex component to the objective function of the CP problem. We show that the iteration-complexity of the inexact AL methods for obtaining a near primal-dual optimal solution compares favorably with that of other penalty-based approaches, such as the quadratic and exact penalty methods, and with another possible approach based on solving variational inequalities. (Abstract shortened by UMI.)
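As a rough illustration of the third part's setup, the following sketch runs an inexact AL loop in which a plain projected gradient solver stands in for Nesterov's optimal method on the inner subproblems; rho, the problem instance, and the iteration counts are illustrative, not the complexity bounds established in the thesis:

```python
import numpy as np

# Hedged sketch of an inexact augmented Lagrangian (AL) loop for
#   minimize 0.5*||x - c||^2  subject to  Ax = b,  x in the unit ball.

rng = np.random.default_rng(2)
n, m, rho = 12, 4, 10.0
A = rng.normal(size=(m, n))
x_feas = rng.normal(size=n)
x_feas /= 2 * np.linalg.norm(x_feas)           # a feasible point in the ball
b = A @ x_feas
c = rng.normal(size=n)
Lip = 1.0 + rho * np.linalg.norm(A.T @ A, 2)   # gradient Lipschitz constant

def project_ball(x, r=1.0):
    nrm = np.linalg.norm(x)
    return x if nrm <= r else (r / nrm) * x

def inner_solve(lam, x, iters=300):
    """Approximately minimize the AL function over the ball (inner loop)."""
    for _ in range(iters):
        grad = (x - c) + A.T @ (lam + rho * (A @ x - b))
        x = project_ball(x - grad / Lip)
    return x

x, lam = np.zeros(n), np.zeros(m)
for k in range(30):                            # outer (multiplier) iterations
    x = inner_solve(lam, x)                    # inexact primal solve, warm-started
    lam = lam + rho * (A @ x - b)              # multiplier update

print("constraint violation:", np.linalg.norm(A @ x - b))
```

In this picture, the variants with better complexity mentioned above would add a small strongly convex term such as (mu/2)*||x - x0||^2 to the objective before running the same loop, trading a controlled perturbation of the problem for cheaper inner solves.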
Keywords/Search Tags:Convex optimization, SA algorithm, Stochastic, Accelerated SA, Method, Inexact AL, Solving, Non-smooth