
A Class Of Descent Nonlinear Conjugate Gradient Methods

Posted on: 2012-05-30
Degree: Master
Type: Thesis
Country: China
Candidate: Y Tao
GTID: 2230330374496231
Subject: Applied Mathematics

Abstract
Nonlinear conjugate gradient methods are efficient tools for solving optimization problems. Their attractive advantages are low storage requirements and good convergence properties, which make them particularly suitable for large-scale optimization problems. However, most standard conjugate gradient methods are not descent methods; for those that are, the descent property often depends strongly on the line search used.

In recent years, descent conjugate gradient methods have received much attention, and various such methods with good convergence properties and numerical performance have been developed. In this thesis, we further study descent conjugate gradient methods based on the modified Fletcher-Reeves (MFR) method and the modified Polak-Ribière-Polyak (MPRP) method. Rather than proposing a single new method, we study the class of conjugate gradient methods formed by convex combinations of the MFR and MPRP directions. This class includes the MFR method and the MPRP method as special cases.

In Chapter 2, we study the properties of this class of conjugate gradient methods and show that it enjoys the same desirable properties as the MFR and MPRP methods:
(1) Each method generates sufficient descent directions for the objective function, independently of the line search used.
(2) If an exact line search is used, each method possesses the quadratic termination property.

In Chapter 3, we study the convergence of the class of methods under different line searches. We first show that with an Armijo-type line search, the methods are globally convergent when applied to minimize a general nonconvex function. We then show that when applied to an unconstrained optimization problem with a uniformly convex objective function, if a Wolfe line search is used, the sequence generated by each method converges to the unique solution of the problem.

Finally, we report extensive numerical experiments on the class of methods. We first test the methods with the Armijo-type line search, examining the performance of members of the class for different values of the combination parameter. We then compare one method from the class with the MFR method and the MPRP method.
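To make the construction concrete, the sketch below implements one plausible instance of such a convex-combination method in Python. It assumes the forms of the MFR and MPRP directions as they are commonly stated in the literature (the modified FR/PRP updates, each satisfying g_k^T d_k = -||g_k||^2 for any line search); the exact parameterization used in the thesis may differ, and the line-search constants delta and rho, as well as the function name, are illustrative assumptions rather than the thesis's choices.

```python
import numpy as np

def combined_descent_cg(f, grad, x0, lam=0.5, tol=1e-6, max_iter=10000,
                        delta=1e-4, rho=0.5):
    """Sketch: descent CG with d_k = lam*d_k^MFR + (1-lam)*d_k^MPRP.

    Both component directions satisfy g_k^T d_k = -||g_k||^2 regardless
    of the line search, so every convex combination (lam in [0, 1]) is a
    sufficient descent direction; lam=1 gives MFR, lam=0 gives MPRP.
    """
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                       # first step: steepest descent
    for _ in range(max_iter):
        if np.linalg.norm(g) <= tol:
            break
        # Backtracking Armijo-type line search:
        # accept t with f(x + t*d) <= f(x) + delta*t*g^T d.
        t, gTd = 1.0, g @ d      # gTd = -||g||^2 < 0 by construction
        while f(x + t * d) > f(x) + delta * t * gTd:
            t *= rho
        x_new = x + t * d
        g_new = grad(x_new)
        y = g_new - g            # gradient difference y_{k-1}
        gg = g @ g               # ||g_{k-1}||^2 (> 0, not yet converged)
        # MFR component: d = -theta*g_new + beta_FR*d,
        #   beta_FR = ||g_new||^2 / ||g||^2,  theta = d^T y / ||g||^2.
        beta_fr = (g_new @ g_new) / gg
        theta_fr = (d @ y) / gg
        d_mfr = -theta_fr * g_new + beta_fr * d
        # MPRP component: d = -g_new + beta_PRP*d - theta*y,
        #   beta_PRP = g_new^T y / ||g||^2,  theta = g_new^T d / ||g||^2.
        beta_prp = (g_new @ y) / gg
        theta_prp = (g_new @ d) / gg
        d_mprp = -g_new + beta_prp * d - theta_prp * y
        # The convex combination inherits g_new^T d = -||g_new||^2.
        d = lam * d_mfr + (1.0 - lam) * d_mprp
        x, g = x_new, g_new
    return x

if __name__ == "__main__":
    # Illustrative usage on the Rosenbrock function.
    f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
    grad = lambda x: np.array([
        -2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
        200 * (x[1] - x[0]**2),
    ])
    print(combined_descent_cg(f, grad, np.array([-1.2, 1.0])))
```

Varying lam between 0 and 1 corresponds to testing different members of the class, which is the kind of comparison the numerical experiments describe.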
Keywords/Search Tags: Unconstrained Optimization, Nonlinear Conjugate Gradient Methods, Global Convergence