
A Class of Conjugate Gradient Algorithms for Nonlinear Optimization Problems

Posted on: 2008-05-02    Degree: Master    Type: Thesis
Country: China    Candidate: J H Chen    Full Text: PDF
GTID: 2190360212487980    Subject: Applied Mathematics
Abstract/Summary:
Finding algorithms for unconstrained optimization that are both theoretically fast and practically effective is a research topic of great interest to optimization specialists and engineers. Numerical experience and theory show that the steepest descent method is the simplest method for unconstrained optimization, but its convergence rate is too slow. Non-quasi-Newton methods, by contrast, are widely considered among the most efficient methods because of their faster convergence rate.

In 1964, Fletcher and Reeves proposed the conjugate gradient method for solving the unconstrained minimization problem min f(x), derived from the conjugate gradient method for linear equations proposed by Hestenes and Stiefel [1]. In paper [2], Al-Baali proved the global convergence of the Fletcher-Reeves (FR) method, and paper [5] extended this result to an inexact line search, the strong Wolfe line search; however, because it may produce very small steps, the FR method performs poorly in numerical experiments. The PRP and HS methods perform better numerically, but in paper [3] Powell showed that the PRP method may fail to converge globally even under exact line search, despite its effective numerical performance [13,19,33]. Many specialists have since studied the global convergence of these methods in depth. In 1997, L. Grippo and S. Lucidi proposed the Grippo-Lucidi line search and proved the global convergence of the PRP method under it [27,28]. In 1995, Dai Yuhong and Yuan Yaxiang proposed the DY method and studied its global convergence and other properties in depth; the convergence behavior of the DY method is very good. In paper [10], Dai established the global convergence of the DY method under a simple line search, but the method is not very good in numerical experiments.

In order to find methods with both better convergence and more effective numerical performance, this paper presents a new class of algorithms for unconstrained nonlinear optimization problems. Numerical experiments indicate that these algorithms are very effective.

In Chapter 1, we first introduce the development of optimization and some common optimality conditions used to identify optimal solutions, and we review several classical derivative-based descent methods for unconstrained programming.

In Chapter 2, starting from the general expression for conjugate gradient methods and the reasoning behind the DY method, we propose a new conjugate gradient algorithm and prove that the choice of the new parameter automatically ensures the descent of the search direction and global convergence under the Wolfe line search.

In Chapter 3, based on the new parameter of Chapter 2, we propose another new parameter and prove the descent property and global convergence of the resulting conjugate gradient algorithm.
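To make the framework discussed above concrete: a nonlinear conjugate gradient method iterates x_{k+1} = x_k + alpha_k d_k with d_{k+1} = -g_{k+1} + beta_k d_k, where the choice of beta_k distinguishes the FR, PRP, HS, and DY methods, and alpha_k is chosen by a (Wolfe) line search requiring f(x_k + alpha d_k) <= f(x_k) + c1 * alpha * g_k^T d_k and g(x_k + alpha d_k)^T d_k >= c2 * g_k^T d_k. Below is a minimal sketch assuming the classical FR, PRP, and DY formulas (the thesis's new parameter choices are not reproduced here); the function name nonlinear_cg and its defaults are illustrative, not the author's implementation, and the Wolfe line search is delegated to scipy.optimize.line_search.

import numpy as np
from scipy.optimize import line_search

def nonlinear_cg(f, grad, x0, beta_rule="DY", tol=1e-6, max_iter=500):
    # Illustrative sketch of the generic nonlinear CG framework,
    # not the algorithm proposed in this thesis.
    x = x0.astype(float)
    g = grad(x)
    d = -g                           # initial direction: steepest descent
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        # Wolfe line search along d (the conditions stated above)
        alpha = line_search(f, grad, x, d, gfk=g)[0]
        if alpha is None:            # line search failed: restart along -g
            d, alpha = -g, 1e-4
        x_new = x + alpha * d
        g_new = grad(x_new)
        y = g_new - g
        if beta_rule == "FR":        # Fletcher-Reeves
            beta = (g_new @ g_new) / (g @ g)
        elif beta_rule == "PRP":     # Polak-Ribiere-Polyak
            beta = (g_new @ y) / (g @ g)
        else:                        # Dai-Yuan; d^T y > 0 under Wolfe search
            beta = (g_new @ g_new) / (d @ y)
        d = -g_new + beta * d        # next conjugate direction
        x, g = x_new, g_new
    return x

# Usage: minimize the Rosenbrock function from a standard starting point
f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
grad = lambda x: np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
                           200 * (x[1] - x[0]**2)])
print(nonlinear_cg(f, grad, np.array([-1.2, 1.0])))

The restart along -g when the line search fails is a common safeguard rather than part of any of the cited methods; with an exact Wolfe search the DY denominator d^T y stays positive, which is what underlies its strong convergence properties mentioned above.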
Keywords/Search Tags: conjugate gradient method, inexact line search, unconstrained optimization, global convergence