On Global Convergence Of Conjugate Gradient Method

Posted on: 2009-03-29
Degree: Master
Type: Thesis
Country: China
Candidate: L Hong
Full Text: PDF
GTID: 2120360245967688
Subject: Applied Mathematics
Abstract/Summary:
The conjugate gradient (CG) method is an effective method for solving unconstrained optimization problems. Due to its simplicity and very low memory requirement, the CG method plays a special role in solving large-scale unconstrained nonlinear optimization problems. This thesis makes a deep study of conjugate gradient methods, proposes several new conjugate gradient methods, and discusses their convergence.

In Chapter 1, we briefly introduce the foundational knowledge and some well-known results on nonlinear conjugate gradient methods, and outline the research content of the thesis.

In Chapter 2, based on the new Armijo-type line search of Wei, Li and Qi and the new conjugate gradient formula $\beta_k^*$ of Wei, Yao and Liu, we propose a new conjugate gradient method and prove that it is globally convergent.

In Chapter 3, based on the class of new quasi-Newton equations $B_{k+1} s_k = y_k^* = y_k + A_k^{(3)} s_k$ due to Wei Zengxin and other researchers, in which $A_k^{(3)}$ is a positive definite matrix, we give a modified conjugate gradient method. The modified method is globally convergent when the restricted Wolfe-Powell line search is used. Preliminary numerical results show that the method is promising.

In Chapter 4, two new classes of conjugate gradient formulas for unconstrained optimization problems are proposed. The new formulas satisfy the sufficient descent condition, and the corresponding algorithms are globally convergent under the Wolfe-Powell conditions. Preliminary numerical results are encouraging.
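For context, the objects the abstract refers to can be written out in their standard textbook form (the thesis's own variants, such as the Armijo-type and restricted Wolfe-Powell searches, modify these). A nonlinear CG method for $\min f(x)$ generates iterates

$x_{k+1} = x_k + \alpha_k d_k$, where $d_1 = -g_1$ and $d_k = -g_k + \beta_k d_{k-1}$ for $k \ge 2$,

with $g_k = \nabla f(x_k)$, the step size $\alpha_k$ produced by a line search, and the scalar $\beta_k$ distinguishing one CG method from another. The standard Wolfe-Powell conditions on $\alpha_k$ are

$f(x_k + \alpha_k d_k) \le f(x_k) + \delta \alpha_k g_k^T d_k$ and $g(x_k + \alpha_k d_k)^T d_k \ge \sigma g_k^T d_k$, with $0 < \delta < \sigma < 1$.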
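The formula $\beta_k^*$ of Wei, Yao and Liu cited in Chapter 2 is, as commonly stated in the literature (given here for reference, not quoted from the thesis itself),

$\beta_k^* = \dfrac{g_k^T \left( g_k - \frac{\|g_k\|}{\|g_{k-1}\|} g_{k-1} \right)}{\|g_{k-1}\|^2}$,

a modification of the Polak-Ribiere-Polyak formula. The sufficient descent condition satisfied by the Chapter 4 formulas is usually stated as $g_k^T d_k \le -c \|g_k\|^2$ for some constant $c > 0$.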
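To make the generic scheme concrete, the following is a minimal Python sketch of a nonlinear CG method using SciPy's Wolfe line search together with the Wei-Yao-Liu $\beta_k$ given above. The function name cg_wyl, the tolerances, and the restart rule are illustrative assumptions, not the thesis's algorithm.

import numpy as np
from scipy.optimize import line_search

def cg_wyl(f, grad, x0, tol=1e-6, max_iter=1000):
    """Nonlinear CG with a Wolfe line search and a Wei-Yao-Liu-style beta."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                       # first direction: steepest descent
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        # Step size satisfying the (standard) Wolfe conditions.
        alpha = line_search(f, grad, x, d)[0]
        if alpha is None:        # line search failed: restart along -g
            d = -g
            alpha = line_search(f, grad, x, d)[0] or 1e-8
        x = x + alpha * d
        g_new = grad(x)
        # Wei-Yao-Liu beta (assumed form; see the formula above).
        beta = g_new @ (g_new - (np.linalg.norm(g_new) / np.linalg.norm(g)) * g) / (g @ g)
        d = -g_new + beta * d
        g = g_new
    return x

# Usage on a standard test problem (Rosenbrock function):
if __name__ == "__main__":
    from scipy.optimize import rosen, rosen_der
    print(cg_wyl(rosen, rosen_der, np.array([-1.2, 1.0])))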
Keywords/Search Tags: Unconstrained optimization, Conjugate gradient method, Line search condition, Global convergence