
Parallel Proximal Method For A Class Of Nonsmooth Convex Optimization Problems

Posted on: 2014-07-11
Degree: Master
Type: Thesis
Country: China
Candidate: J Wei
Full Text: PDF
GTID: 2250330392973743
Subject: Mathematics
Abstract/Summary:
Structured and large-scale convex optimization problems have wide applications in compressed sensing, signal processing, image processing, multi-task learning and so on. Many problems in signal processing, image recovery, matrix completion and machine learning can be abstracted as minimizing the sum of several convex functions over a real space.

In this paper, we consider a class of nonsmooth convex optimization problems in which the objective function is the composition of a strongly convex function with a linear mapping, regularized by a mixed norm. On the one hand, because of the non-smoothness of the objective function, this class of problems cannot be solved directly by traditional methods; these difficulties can, however, be overcome by means of the proximity operator. On the other hand, since the objective function is the sum of three separable convex functions, the forward-backward methods currently in use encounter considerable difficulty in solving these problems.

We propose a parallel proximal algorithm for this class of problems and establish its global convergence. The algorithm fully decomposes the problem in that it handles each function individually through its own proximity operator. Finally, for a special case we also give a proximal gradient method and establish its global linear convergence without assuming strong convexity of the overall objective function. The convex optimization problems studied in this paper apply well to practical problems such as compressed sensing and image processing.
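For concreteness, the problem class described above can be sketched as follows; this is an illustrative formulation only, and the weight \lambda, the linear mapping A, and the particular mixed norm are assumptions rather than the thesis's exact model:

    \min_{x \in \mathbb{R}^n} \; f(Ax) + g(x) + \lambda \|x\|_{1,2},

where f is strongly convex, A is a linear mapping, g is convex (possibly nonsmooth), and \|\cdot\|_{1,2} denotes a mixed norm. The key tool is the proximity operator of a convex function h,

    \mathrm{prox}_{\gamma h}(x) = \arg\min_{y} \Big\{ h(y) + \tfrac{1}{2\gamma}\|y - x\|^2 \Big\}, \qquad \gamma > 0,

which is well defined even when h is nonsmooth. In a generic parallel proximal step, each summand is handled separately,

    y_i^{k} = \mathrm{prox}_{\gamma f_i}(z_i^{k}), \qquad i = 1, 2, 3,

so the three proximity operators can be evaluated independently (hence in parallel) and their outputs combined, in the spirit of product-space proximal splitting.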
Keywords/Search Tags: convex optimization, nonsmooth, parallel proximal method, global convergence, linear convergence