
Study On A Class Of Parallel Primal-dual Methods And Their Applications On Saddle-point Problems

Posted on: 2018-07-27
Degree: Doctor
Type: Dissertation
Country: China
Candidate: X Y Zha
Full Text: PDF
GTID: 1310330542468404
Subject: Computational Mathematics
Abstract/Summary:
This dissertation studies parallel primal-dual methods for solving saddle-point problems. A primal-dual method updates both the primal and the dual variables in each iteration, and can therefore avoid some of the difficulties that arise when working only on the primal or the dual side. For example, in TV minimization, gradient descent applied to the primal functional runs into trouble where the gradient of the solution is zero, because the functional is not differentiable there. Chambolle's method works on the dual very effectively for TV denoising, but it does not extend easily to applications where the dual problem is more complicated, such as TV deblurring. Primal-dual algorithms avoid these difficulties to some extent. They also avoid the auxiliary variable introduced by the popular alternating direction method of multipliers. These methods do not require inverting any matrix, but only applying a linear operator and its adjoint. This advantage is of main interest for large-scale problems in which the inverse does not exist or is too costly to compute.

The primal-dual hybrid gradient (PDHG) method is a classic way to tackle saddle-point problems. Nevertheless, like most other primal-dual methods, it imposes restrictions on the step-size parameters, e.g., τσ < 1/||AᵀA||, to guarantee convergence. As a result, the expensive computation of ||AᵀA|| is unavoidable.

In this thesis, we propose a class of parallel primal-dual methods for saddle-point problems. These methods update the primal and dual variables in parallel, and their convergence can be accelerated, especially when parallel computation techniques are exploited. Unlike most primal-dual methods, all of our methods (except the first) place no restriction on the parameters and hence require no information about ||A||. This is important because evaluating ||A|| can be very expensive as the problem scale grows. We first apply these methods to two-block saddle-point problems, and later extend the work to special three-block saddle-point problems in which one function has a Lipschitz continuous gradient. We use variational inequalities as the tool to study the methods and their convergence. All the new methods converge at the rate O(1/t) in the ergodic sense; in addition, the first method converges at the rate O(1/t) in the non-ergodic sense.
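To illustrate the classic PDHG iteration and its step-size restriction τσ < 1/||AᵀA|| mentioned above, here is a minimal sketch on a toy saddle-point problem min_x max_{||y||∞ ≤ 1} ⟨Ax, y⟩ + ½||x − b||², i.e., the saddle-point form of min_x ½||x − b||² + ||Ax||₁. This is an illustrative sketch of standard PDHG, not the dissertation's proposed parallel methods; the problem, variable names, and step sizes are chosen for demonstration only.

```python
import numpy as np

def pdhg(A, b, tau, sigma, iters=2000):
    """Primal-dual hybrid gradient for
        min_x max_{||y||_inf <= 1}  <Ax, y> + 0.5*||x - b||^2,
    the saddle-point form of min_x 0.5*||x - b||^2 + ||Ax||_1.
    Convergence requires the step-size condition tau*sigma*||A||^2 < 1.
    """
    m, n = A.shape
    x = np.zeros(n)
    y = np.zeros(m)
    x_bar = x.copy()                          # extrapolated primal point
    for _ in range(iters):
        # dual ascent step, then projection onto the l_inf unit ball
        y = np.clip(y + sigma * (A @ x_bar), -1.0, 1.0)
        # primal descent step: closed-form prox of 0.5*||x - b||^2
        x_new = (x + tau * (b - A.T @ y)) / (1.0 + tau)
        # over-relaxation (extrapolation) of the primal variable
        x_bar = 2.0 * x_new - x
        x = x_new
    return x

# Toy check with A = I: the minimizer of 0.5*||x - b||^2 + ||x||_1
# is the soft-thresholding of b at level 1, here [2.0, 0.0].
A = np.eye(2)
b = np.array([3.0, 0.5])
x = pdhg(A, b, tau=0.9, sigma=0.9)    # tau*sigma = 0.81 < 1 = 1/||A^T A||
```

Note that if τσ exceeded 1/||AᵀA|| the iteration could diverge, which is exactly the restriction the dissertation's parameter-free methods are designed to remove.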
Keywords/Search Tags: Convex programming, primal-dual algorithm, saddle-point problem, O(1/t) convergence rate