
Design And Analysis Of Accelerated Algorithms For Several Classes Of Nonsmooth Sparse Optimization Problems

Posted on: 2024-05-06    Degree: Doctor    Type: Dissertation
Country: China    Candidate: F Wu    Full Text: PDF
GTID: 1520307376484284    Subject: Mathematics
Abstract/Summary:
With the development of the data era, there is increasing demand for fast algorithms for solving sparse optimization problems, and extrapolation is one of the effective techniques for improving an algorithm's convergence rate. In this dissertation, we use extrapolation techniques to design several discrete algorithms for different sparse optimization problems; these algorithms have a simple structure, converge quickly, and are easy to implement, and we analyze their convergence behavior. The main contents are as follows:

1. For a class of constrained sparse optimization problems whose objective function is the sum of two continuous convex functions, we propose a smoothing accelerated proximal gradient algorithm. By designing the updating rule for the smoothing parameters, we prove that the iterates generated by the algorithm converge to an optimal solution of the problem and that the corresponding objective function values converge at the rate O(ln^σ k / k), where σ ∈ (1/2, 1]. In addition, the algorithm only needs to compute the proximal operator of one part of the objective function. We also study the stability of the algorithm with respect to errors in the gradient computation and prove that the resulting inexact smoothing accelerated proximal gradient algorithm has the same convergence behaviour as the exact version under a summability condition on the errors.

2. For a class of ℓ0-penalized box-constrained sparse optimization problems whose loss function is smooth and convex, we propose an accelerated iterative hard thresholding algorithm. Under the condition that the extrapolation coefficients are less than a critical value, we prove that the sequence of iterates converges globally to a local minimizer of the problem with the lower bound property, and that the sequence of function values converges to the corresponding locally optimal value. When the loss function also satisfies an error bound condition, we show that both the iterates and the corresponding objective function values converge R-linearly.

3. For a class of ℓ0-penalized box-constrained sparse optimization models whose loss function is nonsmooth and convex, we propose an accelerated smoothing hard thresholding algorithm in which the extrapolation coefficients can be chosen to satisfy sup_k β_k = 1. First, we give a sufficient condition ensuring that every accumulation point of the iterates is a local minimizer of the problem. Second, for a specific updating scheme of the extrapolation coefficients and smoothing parameters, we show that the iterates converge to a local minimizer of the problem and that the corresponding function values converge at the rate o(ln^σ k / k), σ ∈ (1/2, 1]. For the case where the loss function is smooth and convex, we propose an accelerated hard thresholding algorithm and prove that its iterates converge to a local minimizer of the problem satisfying a desirable lower bound property; moreover, the corresponding function values converge at the rate o(k^(-2)).

4. For a class of ℓ0-penalized box-constrained sparse optimization problems whose loss function is smooth and convex, we propose an extrapolated proximal gradient algorithm with dry friction obtained by discretizing a second-order inertial gradient system. Exploiting the structure of the problem, we define a class of ε-local minimizers. Based on the construction of the algorithm and the distinctive properties of the dry friction function, we obtain the following properties. When the dry friction coefficient is larger than 0, we prove that the sequence of iterates has finite length and converges to an ε-local minimizer of the problem. When the dry friction coefficient equals 0, we establish that every accumulation point of the iterates is a local minimizer of the problem. Furthermore, inspired by Nesterov's acceleration scheme, we propose a new proximal gradient algorithm with extrapolation based on a different discretization, and prove that it retains the convergence properties obtained above.
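To make the structure of the method described in Part 1 concrete, the following is a minimal Python sketch of a smoothing accelerated proximal gradient iteration. The callables grad_f_mu and prox_g, the smoothing-parameter schedule mu_k = 1/k, and the FISTA-style extrapolation coefficient are illustrative assumptions; the dissertation's actual update rules are not reproduced here.

```python
import numpy as np

def smoothing_apg(grad_f_mu, prox_g, x0, L, n_iter=500):
    # Sketch of a smoothing accelerated proximal gradient iteration (Part 1).
    # grad_f_mu(x, mu): gradient of a smooth approximation (parameter mu) of the
    #   nonsmooth part of the objective.
    # prox_g(v, t): proximal operator of the remaining part with step t.
    # The schedule for mu_k and the extrapolation coefficients below are generic
    # assumptions, not the dissertation's exact rules.
    x_prev = x = np.asarray(x0, dtype=float)
    for k in range(1, n_iter + 1):
        mu_k = 1.0 / k                      # assumed decreasing smoothing parameter
        beta_k = (k - 1.0) / (k + 2.0)      # Nesterov-type extrapolation coefficient
        y = x + beta_k * (x - x_prev)       # extrapolation step
        t = mu_k / L                        # step size matching the smoothed gradient's Lipschitz constant ~ L / mu_k
        x_prev, x = x, prox_g(y - t * grad_f_mu(y, mu_k), t)
    return x
```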
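Similarly, the accelerated iterative hard thresholding iteration of Part 2 can be illustrated by the sketch below. The componentwise box-constrained hard thresholding map and the constant extrapolation coefficient beta are generic assumptions rather than the dissertation's exact scheme.

```python
import numpy as np

def box_hard_threshold(z, lam, t, lo, hi):
    # Componentwise proximal map of lam * ||x||_0 + indicator of the box [lo, hi]
    # with step t, assuming 0 lies inside the box.  This is one generic way to
    # write the map; the dissertation may use an equivalent closed form.
    p = np.clip(z, lo, hi)                                   # box projection of the trial point
    keep = (p - z) ** 2 + 2.0 * t * lam <= z ** 2            # keep a nonzero entry only if cheaper than zeroing it
    return np.where(keep, p, 0.0)

def accelerated_iht(grad_f, L, lam, x0, lo, hi, beta=0.3, n_iter=500):
    # Sketch of an accelerated iterative hard thresholding iteration (Part 2).
    # beta is a constant extrapolation coefficient assumed to stay below the
    # critical value mentioned in the abstract (whose exact form is not given here).
    x_prev = x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        y = x + beta * (x - x_prev)                          # extrapolation step
        z = y - grad_f(y) / L                                # gradient step on the smooth loss
        x_prev, x = x, box_hard_threshold(z, lam, 1.0 / L, lo, hi)
    return x
```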
Keywords/Search Tags:sparse optimization, nonsmooth optimization, accelerated algorithm with extrapolation, sequential convergence, convergence rate