
Research on Non-monotone Memory Gradient Methods

Posted on: 2019-08-27
Degree: Master
Type: Thesis
Country: China
Candidate: Y Z Li
Full Text: PDF
GTID: 2370330569979086
Subject: Mathematics
Abstract/Summary:
The conjugate gradient method is generally an effective algorithm for solving large-scale unconstrained optimization problems, since it avoids computing and storing matrices. Similarly, the memory gradient method requires no matrix computation or storage; moreover, compared with the conjugate gradient method, it offers greater freedom in parameter selection and is therefore more promising for constructing stable and fast algorithms. The method is simple and well suited to large-scale unconstrained optimization. In this paper, we propose several non-monotone memory gradient methods and prove the feasibility and convergence of the algorithms. Firstly, we apply the non-monotone term R_k to an improved Armijo line search and combine it with a memory gradient method to obtain a new non-monotone memory gradient method, whose global convergence we then prove. Secondly, the non-monotone term f(x_k) + ε_k is applied to the Wolfe line search and combined with another memory gradient method, yielding a second new non-monotone memory gradient method; its global convergence is also investigated. Thirdly, based on the trust region algorithm, we propose a new memory gradient algorithm by incorporating the non-monotone term R_k; a self-adaptive technique is also employed to reduce the number of times the sub-problem must be re-solved and thus avoid a large amount of computation. Finally, we give a summary and discuss prospects for further research.
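The abstract does not give the thesis's algorithmic details, but the general scheme it builds on can be illustrated. The following is a minimal Python/NumPy sketch of a memory gradient method combined with a Grippo-Lampariello-Lucidi-style non-monotone Armijo line search: the direction update d_k = -g_k + beta_k d_{k-1}, the particular choice of beta_k (picked here only so that d_k stays a descent direction), and all parameter values are illustrative assumptions, not the specific formulas of the thesis.

import numpy as np

def rosenbrock(x):
    # Classic two-dimensional Rosenbrock test function.
    return 100.0 * (x[1] - x[0] ** 2) ** 2 + (1.0 - x[0]) ** 2

def rosenbrock_grad(x):
    return np.array([
        -400.0 * x[0] * (x[1] - x[0] ** 2) - 2.0 * (1.0 - x[0]),
        200.0 * (x[1] - x[0] ** 2),
    ])

def memory_gradient(f, grad, x0, M=5, mu=0.4, delta=1e-4, rho=0.5,
                    tol=1e-6, max_iter=10000):
    # Memory gradient method with a non-monotone Armijo line search.
    # Direction: d_k = -g_k + beta_k * d_{k-1}.  The beta_k below is an
    # assumed safeguard choice: it gives beta_k <= mu and
    # beta_k * |g_k^T d_{k-1}| <= mu * ||g_k||^2, hence
    # g_k^T d_k <= -(1 - mu) * ||g_k||^2, so d_k is always a descent
    # direction.  The Armijo test is taken against the maximum of the
    # last M function values rather than f(x_k) alone.
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                        # first iteration: steepest descent
    history = [f(x)]              # recent f-values for the non-monotone test
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        f_ref = max(history)      # non-monotone reference value
        gd = g.dot(d)             # negative by the descent property
        alpha = 1.0
        while f(x + alpha * d) > f_ref + delta * alpha * gd and alpha > 1e-12:
            alpha *= rho          # backtrack
        x = x + alpha * d
        g = grad(x)
        beta = mu * g.dot(g) / (g.dot(g) + abs(g.dot(d)) + 1e-16)
        d = -g + beta * d
        history.append(f(x))
        if len(history) > M:      # keep only the last M values
            history.pop(0)
    return x

x_star = memory_gradient(rosenbrock, rosenbrock_grad, x0=[-1.2, 1.0])
print(x_star)                     # should approach the minimizer (1, 1)

The non-monotone reference max(history) lets the iterates temporarily increase f, which often helps long-step methods such as this one escape narrow curved valleys where a strictly monotone Armijo rule would force tiny steps.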
Keywords/Search Tags:Unconstrained optimization, Memory gradient method, Trust region method, Global convergence, Non-monotone line search