
SG-VU Algorithm For Nonsmooth Convex Minimization

Posted on: 2022-08-20    Degree: Master    Type: Thesis
Country: China    Candidate: H Shi    Full Text: PDF
GTID: 2480306563977959    Subject: Operational Research and Cybernetics
Abstract/Summary:
Nonsmooth optimization has long been a central topic in the optimization field, with wide applications across disciplines such as image restoration, optimal control, variable selection, stochastic equilibrium, signal reconstruction, and economic management. However, the complicated structure and characteristics of nonsmooth functions make it difficult to obtain a second-order expansion directly. Since second-order information governs the convergence rate of the corresponding algorithms, it is hard for algorithms on such problems to attain high-order convergence. In recent years, with the rapid development of computers, complex large-scale nonsmooth optimization problems have arisen widely in practice, making the efficiency and convergence of optimization algorithms ever more important. Therefore, how to design an optimization algorithm that has high-order convergence and is also suitable for large-scale problems has become the focus of many scholars.

This thesis mainly studies an algorithm for a class of nonsmooth convex optimization problems, and verifies its feasibility and efficiency through a series of numerical experiments. The specific research contents are as follows:

(1) For a class of convex nonsmooth optimization problems, a VU algorithm based on the smoothing gradient method (SG-VU) is designed, which alternates two orthogonal steps: a V-step and a U-step. In the V-step, the smoothing gradient method generates a sequence of proximal points that approximate points on the primal track, thereby finding the minimum in the V-space; points on the dual track are approximated by a simple calculation, which avoids solving two optimization problems in the V-step. In the U-step, the U-subgradient and U-Hessian matrix obtained in the V-step are used to perform a U-Newton step, which finds the minimum in the U-space and accelerates the algorithm. By introducing a simple line search into the design, the algorithm ensures sufficient decrease at every complete iteration.

(2) A comprehensive convergence analysis of the SG-VU algorithm is carried out. Global convergence is proved by analyzing the stopping conditions of the inner loop in the V-step. Using properties of the smoothing method, inequality relations are established among the proximal points generated in the V-step, the exact proximal points, and the approximate dual-trajectory points. Furthermore, under suitable assumptions, the SG-VU algorithm is proved to converge superlinearly.

(3) A series of numerical experiments shows that the SG-VU algorithm outperforms the bundle-VU and SG algorithms: it attains higher precision in shorter CPU time, and its overall optimization performance is best. Moreover, on complex large-scale problems the SG-VU algorithm also performs well, and it is more robust to the choice of the initial smoothing parameter. Its superlinear convergence is verified by observing and analyzing the solution process of the algorithm. Furthermore, optimization problems proposed in the literature are solved, which further demonstrates the feasibility and efficiency of the SG-VU algorithm.
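As a rough illustration of the alternating V-/U-step idea described in (1) — not the thesis's actual SG-VU implementation — consider the toy function f(x) = |x1| + ½x2², whose nonsmoothness is confined to the first coordinate, so the V-space is span(e1) and the smooth U-space is span(e2). The sketch below smooths |x1| as sqrt(x1² + μ²), runs gradient steps on the smoothed function in place of the V-step's proximal iteration, and then takes an exact Newton step in the U-space (the U-Hessian is 1 here); the step-size and μ-shrinking rules are hypothetical choices for this toy problem:

```python
import numpy as np

def f(x):
    # Toy nonsmooth convex objective: |x1| + 0.5 * x2^2, minimized at (0, 0).
    return abs(x[0]) + 0.5 * x[1] ** 2

def grad_smoothed(x, mu):
    # Gradient of the smoothed objective f_mu(x) = sqrt(x1^2 + mu^2) + 0.5 * x2^2.
    return np.array([x[0] / np.sqrt(x[0] ** 2 + mu ** 2), x[1]])

def vu_sketch(x0, mu=1.0, outer_iters=20, inner_iters=50):
    x = np.array(x0, dtype=float)
    for _ in range(outer_iters):
        # "V-step" stand-in: smoothing-gradient descent; step size mu matches
        # the 1/mu Lipschitz constant of the smoothed gradient in x1.
        for _ in range(inner_iters):
            x = x - mu * grad_smoothed(x, mu)
        # "U-step": exact Newton step in the smooth subspace (U-Hessian = 1),
        # which lands on the U-space minimizer x2 = 0 in one step.
        x[1] = x[1] - x[1] / 1.0
        # Drive the smoothing parameter toward zero between outer iterations.
        mu *= 0.5
    return x

x_star = vu_sketch([2.0, 3.0])
print(f(x_star))  # ≈ 0
```

The split matters because a plain subgradient method on f would converge only sublinearly along x1, while the Newton step exploits the second-order structure available in the U-space; the real SG-VU algorithm obtains the U-subgradient and U-Hessian from its V-step rather than assuming them as done here.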
Keywords/Search Tags: VU decomposition, Smoothing method, Superlinear convergence, Proximal point, Nonsmooth optimization