
On the Convergence Rate of Multi-Step Methods

Posted on: 2021-05-14
Degree: Master
Type: Thesis
Country: China
Candidate: Z Y Liu
Full Text: PDF
GTID: 2370330605952825
Subject: Control Science and Engineering
Abstract/Summary:
With the development of computer science and the demands of practical problems, more and more large-scale optimization problems have emerged. The core of solving a large-scale optimization problem is to design an effective algorithm, which places requirements on both the computational and communication costs. The multi-step gradient method is an effective algorithm for large-scale unconstrained optimization because of its simplicity, low storage requirements, and low computational cost. Many algorithms converge slowly when solving large-scale optimization problems; distributed algorithms in particular tend to converge more slowly and require more iterations than centralized ones. Therefore, how to analyze and improve the convergence speed of an algorithm has become a focus of research.

This thesis first introduces several classic multi-step gradient methods and the state of research on them. Building on previous results, it proposes a new multi-step gradient method, further enriching the study of this class of methods. The main contributions are as follows.

First, this thesis presents a new way to prove the optimal convergence rate of the two-step gradient descent method. The previous approach was to solve directly for the eigenvalues of the system matrix to obtain the convergence rate of the algorithm. Here, tools from control theory such as the Routh criterion are introduced to determine the distribution of the roots without solving the characteristic equation. Compared with earlier proofs, this method is more concise and easier to follow.

Second, we develop a three-step gradient method for unconstrained optimization of convex functions with Lipschitz-continuous gradients. Given bounds on the Hessian of the objective function, we derive necessary and sufficient conditions for its convergence. Furthermore, we establish the connection between the convergence rate and the algorithm parameters, and based on this connection we determine, for the first time, the parameter values that guarantee the fastest convergence. The results show that the three-step gradient method degenerates to the two-step gradient method when it attains the fastest convergence.
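The abstract does not reproduce the update rule of the two-step gradient method. The following Python sketch assumes the classical heavy-ball form x_{k+1} = x_k - alpha*grad(x_k) + beta*(x_k - x_{k-1}), with Polyak's standard parameter choice for a quadratic whose Hessian eigenvalues lie in [mu, L]; the function names and the test problem are illustrative, not the thesis's own.

    import numpy as np

    def two_step_gradient(grad, x0, alpha, beta, iters=1000, tol=1e-10):
        # Assumed heavy-ball iteration:
        # x_{k+1} = x_k - alpha*grad(x_k) + beta*(x_k - x_{k-1})
        x_prev, x = x0.copy(), x0.copy()
        for _ in range(iters):
            x_next = x - alpha * grad(x) + beta * (x - x_prev)
            if np.linalg.norm(x_next - x) < tol:
                return x_next
            x_prev, x = x, x_next
        return x

    # Quadratic test problem f(x) = 0.5 x^T A x, Hessian eigenvalues in [mu, L].
    A = np.diag([1.0, 10.0])                      # mu = 1, L = 10
    mu, L = 1.0, 10.0
    alpha = 4.0 / (np.sqrt(L) + np.sqrt(mu))**2   # Polyak's standard choice
    beta = ((np.sqrt(L) - np.sqrt(mu)) / (np.sqrt(L) + np.sqrt(mu)))**2
    x_min = two_step_gradient(lambda x: A @ x, np.array([5.0, -3.0]), alpha, beta)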
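To illustrate the earlier proof technique the abstract refers to (obtaining the convergence rate from the eigenvalues of the system matrix), the sketch below computes, for each Hessian eigenvalue lam, the roots of the per-eigenvalue characteristic polynomial z^2 - (1 + beta - alpha*lam) z + beta of the heavy-ball iteration above; the convergence rate is the largest root modulus. The Routh-criterion approach described in the thesis instead locates these roots relative to the stability region without solving for them.

    import numpy as np

    def two_step_rate(alpha, beta, hessian_eigs):
        # For each Hessian eigenvalue lam, the error follows a linear recursion
        # with characteristic polynomial z^2 - (1 + beta - alpha*lam) z + beta;
        # the overall convergence rate is the largest root modulus over all lam.
        rate = 0.0
        for lam in hessian_eigs:
            roots = np.roots([1.0, -(1.0 + beta - alpha * lam), beta])
            rate = max(rate, max(abs(r) for r in roots))
        return rate  # the iteration converges iff rate < 1

    # With Polyak's parameters on eigenvalues [1, 10], the rate is about 0.52.
    a = 4.0 / (np.sqrt(10.0) + 1.0)**2
    b = ((np.sqrt(10.0) - 1.0) / (np.sqrt(10.0) + 1.0))**2
    print(two_step_rate(a, b, [1.0, 10.0]))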
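The exact three-step update proposed in the thesis is not given in the abstract. The following is only a plausible form for illustration, extending the momentum term by one more past difference; setting gamma = 0 recovers the two-step method, which is at least consistent with the degeneration result stated above.

    import numpy as np

    def three_step_gradient(grad, x0, alpha, beta, gamma, iters=1000, tol=1e-10):
        # Hypothetical three-step update (assumed form, not from the thesis):
        # x_{k+1} = x_k - alpha*grad(x_k)
        #               + beta*(x_k - x_{k-1}) + gamma*(x_{k-1} - x_{k-2})
        # gamma = 0 reduces this to the two-step (heavy-ball) method.
        x_pp, x_p, x = x0.copy(), x0.copy(), x0.copy()
        for _ in range(iters):
            x_next = x - alpha * grad(x) + beta * (x - x_p) + gamma * (x_p - x_pp)
            if np.linalg.norm(x_next - x) < tol:
                return x_next
            x_pp, x_p, x = x_p, x, x_next
        return x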
Keywords/Search Tags: optimization, unconstrained optimization, gradient descent