
Parallel Splitting Methods For Separable Convex Optimization

Posted on: 2017-03-20
Degree: Master
Type: Thesis
Country: China
Candidate: L Luo
Full Text: PDF
GTID: 2180330485970489
Subject: Operational Research and Cybernetics

Abstract/Summary:
The separable convex optimization problem with linear constraints arises in image processing, signal denoising, and other fields, and how to solve it efficiently has attracted wide attention. The alternating direction method of multipliers (ADMM) is one of the most effective approaches to such problems. However, when the objective is the sum of three or more individual functions, the convergence of the directly extended ADMM can only be proved under additional assumptions such as strong convexity of the objective functions. Many authors have therefore proposed prediction-correction schemes: a predictor is generated by ADMM-like techniques, and the new iterate is obtained after a further correction step. To make the predictor easier to compute, we propose two new splitting methods for the convex optimization problem whose objective is the sum of three individual functions: within the prediction-correction framework, we apply the linearized proximal alternating direction method in the prediction step, first to part of the variables and then to all of them in parallel. In the correction step, we keep the correction as small as possible in order to preserve the advantages of ADMM; that is, we leave the first two variables unchanged and correct only the last variable, using the previous iterate and the predictor. Finally, we propose an extension of the predictor-corrector proximal multiplier method to problems whose objective is the sum of n individual functions.

The first algorithm in this thesis is based on the partial parallel splitting augmented Lagrangian method of [19]: Han et al., "A Partial Splitting Augmented Lagrangian Method for Low Patch-Rank Image Decomposition", J. Math. Imaging Vis., 2015, 51: 145-160, which updates part of the variables in parallel in the prediction step. To obtain the predictors more easily, we linearize the augmented terms in the prediction step. The second algorithm is based on the method of [28]: He B.S., "Parallel splitting augmented Lagrangian methods for monotone structured variational inequalities", Comput. Optim. Appl., 2009, 42(2): 195-212; it linearizes the augmented terms and adds proximal terms. To retain the advantages of ADMM, the correction is kept minimal, so the correction step of the method in [19] is used. The convergence of both new methods is proved under weaker conditions, and numerical examples illustrate their feasibility and their advantages in objective value, computing time, and number of iterations. The difference between the first and the second algorithm is whether part or all of the variables are updated in parallel. In the fourth chapter, the predictor-corrector proximal multiplier method of [6]: Chen G. and Teboulle M., "A proximal-based decomposition method for convex minimization problems", Mathematical Programming, 1994, 64(1-3): 81-101, is extended to problems whose objective is the sum of n individual functions. The convergence of the resulting algorithm is established, and its effectiveness is shown by numerical examples: compared with the other two methods, it has advantages in computing time and number of iterations.
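For orientation, the problem class treated above can be written, in standard notation rather than verbatim from the thesis, as

```latex
\min_{x_1,x_2,x_3}\; f_1(x_1)+f_2(x_2)+f_3(x_3)
\quad\text{s.t.}\quad A_1x_1+A_2x_2+A_3x_3=b,\qquad x_i\in\mathcal{X}_i,
```

with the associated augmented Lagrangian, on which the splitting methods operate,

```latex
\mathcal{L}_\beta(x_1,x_2,x_3,\lambda)
=\sum_{i=1}^{3}f_i(x_i)
-\lambda^{\top}\Big(\sum_{i=1}^{3}A_ix_i-b\Big)
+\frac{\beta}{2}\Big\|\sum_{i=1}^{3}A_ix_i-b\Big\|^{2}.
```

Splitting methods exploit the fact that, with the other blocks fixed, each subproblem involves only one $f_i$.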
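The following is a minimal sketch of the classical two-block ADMM that the thesis builds on, not the thesis's own three-block prediction-correction methods. It assumes, purely for illustration, scalar variables, quadratic terms f1(x) = (x-c1)^2/2 and f2(z) = (z-c2)^2/2, the constraint x + z = b, and the scaled dual form; all names are hypothetical.

```python
# Two-block ADMM sketch for: min 1/2*(x-c1)^2 + 1/2*(z-c2)^2  s.t.  x + z = b.
# Scaled-dual form; c1, c2, b, rho are illustrative choices, not from the thesis.

def admm_two_block(c1, c2, b, rho=1.0, iters=100):
    """Alternate exact minimization over x and z, then ascend the dual."""
    x = z = u = 0.0  # u is the scaled dual variable (lambda / rho)
    for _ in range(iters):
        # x-subproblem: argmin_x 1/2*(x - c1)^2 + rho/2*(x + z - b + u)^2
        x = (c1 + rho * (b - z - u)) / (1.0 + rho)
        # z-subproblem: argmin_z 1/2*(z - c2)^2 + rho/2*(x + z - b + u)^2
        z = (c2 + rho * (b - x - u)) / (1.0 + rho)
        # dual update: accumulate the constraint residual
        u += x + z - b
    return x, z

# For c1=1, c2=2, b=4 the KKT conditions give x=1.5, z=2.5.
```

The directly extended three-block variant obtained by adding a third such subproblem is exactly the scheme whose convergence, as noted above, requires extra assumptions, which is what motivates the prediction-correction methods of the thesis.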
Keywords/Search Tags: Separable convex optimization, alternating direction method, linearized method, proximal point multiplier method, parallel splitting method