
On The Convergence Analysis Of Douglas-Rachford Splitting Method For Nonconvex Optimization Problems

Posted on: 2018-12-07    Degree: Doctor    Type: Dissertation
Country: China    Candidate: K Guo    Full Text: PDF
GTID: 1310330518490180    Subject: Computational Mathematics
Abstract/Summary:
In this thesis, we investigate the convergence analysis of the Douglas-Rachford operator splitting method for nonconvex optimization problems. The thesis consists of four parts and is organized as follows.

In Chapters 1 and 2, the background of the thesis is introduced and some preliminaries that are used throughout the thesis are provided.

In Chapter 3, we consider the convergence of the alternating direction method of multipliers (ADMM) for solving linearly constrained nonconvex minimization problems. Essentially, ADMM can be viewed as the Douglas-Rachford splitting method applied to the dual of a two-block separable convex optimization problem with linear constraints. The efficiency of the classic ADMM has been demonstrated in various applications to large-scale separable optimization problems, with both convex and nonconvex objective functions. While there is a large body of convergence analysis for the convex case, the nonconvex case remains largely open and research on it is still in its infancy. We consider three problems: the two-block separable nonconvex optimization problem with linear constraints, the multi-block separable nonconvex optimization problem with linear constraints, and linearly constrained nonconvex optimization with coupled objective functions. Under the assumption that the corresponding augmented Lagrangian function satisfies the Kurdyka-Łojasiewicz inequality, we prove that the iterative sequence generated by ADMM for these problems converges to a critical point of the augmented Lagrangian function, provided that the penalty parameter in the augmented Lagrangian is sufficiently large. Under some further conditions on the problem data, we also analyze the convergence rate of the algorithm.

In Chapter 4, we consider the convergence of the Douglas-Rachford splitting method (DRSM) for minimizing the sum of a strongly convex function and a weakly convex function, a setting with various applications, especially in sparsity-driven scenarios where nonconvex penalties are used to avoid the biased estimates that often arise with convex penalties. Although the convergence of the DRSM has been well studied in the case where both functions are convex, results for cases involving nonconvex functions, including the "strongly + weakly" convex case, are still in their infancy. We prove the convergence of the DRSM in the "strongly + weakly" convex setting under relatively mild assumptions compared with existing work in the literature. Moreover, we establish the rate of asymptotic regularity of the Douglas-Rachford operator and, under a metric subregularity assumption, a local linear convergence rate in an asymptotic sense.
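For orientation, the two schemes discussed above can be stated in their standard generic forms; the notation below is illustrative and is not taken from the thesis itself. For the two-block problem $\min_{x,y} f(x)+g(y)$ subject to $Ax+By=b$, with augmented Lagrangian
\[
\mathcal{L}_\beta(x,y,\lambda) = f(x)+g(y)+\langle \lambda,\, Ax+By-b\rangle + \tfrac{\beta}{2}\|Ax+By-b\|^2,
\]
one ADMM iteration of the kind analyzed in Chapter 3 reads
\[
\begin{aligned}
x^{k+1} &\in \operatorname*{arg\,min}_{x}\, \mathcal{L}_\beta(x, y^k, \lambda^k),\\
y^{k+1} &\in \operatorname*{arg\,min}_{y}\, \mathcal{L}_\beta(x^{k+1}, y, \lambda^k),\\
\lambda^{k+1} &= \lambda^k + \beta\,(Ax^{k+1}+By^{k+1}-b).
\end{aligned}
\]
For $\min_{x} f(x)+g(x)$ (with $f$ strongly convex and $g$ weakly convex in the setting of Chapter 4), one iteration of the DRSM with step size $\gamma>0$ can be written as
\[
\begin{aligned}
x^{k+1} &= \operatorname{prox}_{\gamma f}(z^k),\\
y^{k+1} &= \operatorname{prox}_{\gamma g}(2x^{k+1}-z^k),\\
z^{k+1} &= z^k + y^{k+1} - x^{k+1},
\end{aligned}
\]
where $\operatorname{prox}_{\gamma f}(z)=\operatorname*{arg\,min}_{x}\{f(x)+\tfrac{1}{2\gamma}\|x-z\|^2\}$. As stated in the abstract, the convergence result of Chapter 3 requires the penalty parameter $\beta$ to be sufficiently large and relies on the Kurdyka-Łojasiewicz property of $\mathcal{L}_\beta$.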
Keywords/Search Tags: Alternating direction method of multipliers, Kurdyka-Łojasiewicz inequality, nonconvex optimization, linear constraints, global convergence, separable structure, coupled objective function, nonsmooth analysis, Douglas-Rachford splitting method