
Hybrid Conjugate Gradient Algorithm For Solving Unconstrained Optimization Problems

Posted on: 2010-01-20
Degree: Master
Type: Thesis
Country: China
Candidate: H Yan
Full Text: PDF
GTID: 2190360275464982
Subject: Applied Mathematics
Abstract/Summary:
In this paper, we mainly study hybrid conjugate gradient methods for unconstrained optimization. The conjugate gradient method is a kind of conjugate direction method. It lies between the steepest descent method and Newton's method: it requires only the first derivative of the function and overcomes the slow convergence of the steepest descent method, while avoiding the storage and computation of the second-derivative information that Newton's method needs. For minimizing a quadratic function, conjugate direction methods have the quadratic (finite) termination property, and they converge quickly on general functions. The most typical conjugate direction method is the conjugate gradient method. Its basic idea is to combine the conjugacy property with the steepest descent method: by minimizing the objective function along conjugate directions constructed at the known points, we obtain the minimizer. The key features of conjugate gradient methods are that they require no matrix storage and are simple to implement. The conjugate gradient method is one of the earliest known techniques for solving large-scale optimization problems, and especially large problems in oil exploration, atmospheric modeling, aerospace and other fields are often solved with it.

The linear conjugate gradient method is one member of this family; suitably modified, it can be used to minimize general differentiable functions. The theory in this area is essentially complete, except for ill-conditioned problems. The nonlinear conjugate gradient method was introduced by R. Fletcher and C. Reeves in 1964, who extended the results of Hestenes and Stiefel to general nonlinear optimization problems, so that formulas derived for strictly convex quadratic functions can also be applied to general non-quadratic functions.

Touati-Ahmed and Storey first proposed a hybrid PRP-FR method, combining the PRP method, which has good numerical performance, with the FR method, which has nice global convergence. In 2001, Y. H. Dai and Y. Yuan discussed a hybrid DY-HS method and obtained its global convergence under the Wolfe line search. In 2004, Z. F. Dai and L. P. Chen proposed a new β_k, yielding a new hybrid HS-DY method; this method needs no descent condition and is globally convergent under the standard Wolfe line search. Zhang and Zhou made a small modification to the FR method and the DY method; the resulting methods have the important property that the search directions satisfy g_k^T d_k = -‖g_k‖^2 independently of the line search used, and they reduce to the FR method and the DY method, respectively, when the exact line search is used.

Based on the preceding work, we propose two types of hybrid conjugate gradient methods. The frame of this article is as follows. In Chapter 1, we first introduce the development of optimization and some widely used optimality conditions that characterize the optimal solution; we then review some commonly used line searches and derivative-based descent methods for unconstrained optimization problems, together with the related research. In Chapter 2, we propose a new formula β_k for unconstrained optimization, a hybrid of the HS, LS and CD methods, together with a search direction different from the traditional d_k. This direction satisfies the descent condition naturally, and d_k^T g_k = -‖g_k‖^2 holds independently of both the line search used and the convexity of the objective function. Under suitable conditions we prove that the new method is globally convergent, and numerical experiments show that the algorithm is efficient; a sketch of such an iteration is given below.
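To make the construction concrete, here is a minimal Python sketch of a hybrid conjugate gradient iteration. The β shown is the Dai-Yuan HS/DY hybrid mentioned above, not the thesis's new β_k (whose formula is not given in this abstract); the Armijo backtracking is a simple stand-in for the Wolfe search assumed by the convergence theory; and the scaling factor θ enforces g_k^T d_k = -‖g_k‖^2 for any β, in the spirit of the Zhang-Zhou modification.

```python
import numpy as np

def hybrid_cg(f, grad, x0, tol=1e-6, max_iter=1000):
    """Hybrid nonlinear CG: beta = max(0, min(beta_HS, beta_DY)),
    with a scaled direction guaranteeing g^T d = -||g||^2."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        # Armijo backtracking (a stand-in for the Wolfe search).
        alpha, c, rho = 1.0, 1e-4, 0.5
        fx, slope = f(x), g @ d            # slope = -||g||^2 < 0
        while f(x + alpha * d) > fx + c * alpha * slope:
            alpha *= rho
        x = x + alpha * d
        g_new = grad(x)
        y = g_new - g                      # gradient difference y_k
        denom = d @ y
        if abs(denom) < 1e-12:             # safeguard: restart step
            beta = 0.0
        else:
            beta_hs = (g_new @ y) / denom          # Hestenes-Stiefel
            beta_dy = (g_new @ g_new) / denom      # Dai-Yuan
            beta = max(0.0, min(beta_hs, beta_dy))
        # Scale the gradient term so that g^T d = -||g||^2 holds
        # regardless of beta and of the line search used.
        theta = 1.0 + beta * (g_new @ d) / max(g_new @ g_new, 1e-16)
        d = -theta * g_new + beta * d
        g = g_new
    return x

# Example: minimize the Rosenbrock function.
rosen = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
rosen_grad = lambda x: np.array([
    -2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
    200 * (x[1] - x[0]**2),
])
print(hybrid_cg(rosen, rosen_grad, [-1.2, 1.0]))
```

Setting β = 0 in the safeguard reduces the update to a steepest descent restart, which is the usual fallback when the curvature denominator d^T y degenerates.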
In Chapter 3, a new hybrid conjugate gradient method for unconstrained optimization is presented, which contains the hybrid methods of Touati-Ahmed and Storey and of Gilbert and Nocedal as special cases under the exact line search. By the construction of the new formula β_k, the new algorithm satisfies the descent condition naturally, and d_k^T g_k = -‖g_k‖^2 again holds independently of both the line search used and the convexity of the objective function. Under standard conditions we prove that the new method is globally convergent, and numerical results also show its efficiency.
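For reference, the two classical hybrids that the Chapter 3 method is said to contain as special cases can be written down from their standard definitions (a hedged sketch; the thesis's own β_k is not given in this abstract):

```python
import numpy as np

def beta_hybrid_touati_storey(g_new, g):
    """Touati-Ahmed/Storey hybrid: use beta_PRP when it lies in
    [0, beta_FR], otherwise fall back to beta_FR."""
    beta_fr = (g_new @ g_new) / (g @ g)           # Fletcher-Reeves
    beta_prp = (g_new @ (g_new - g)) / (g @ g)    # Polak-Ribiere-Polyak
    return beta_prp if 0.0 <= beta_prp <= beta_fr else beta_fr

def beta_hybrid_gilbert_nocedal(g_new, g):
    """Gilbert/Nocedal hybrid: clamp beta_PRP to [-beta_FR, beta_FR]."""
    beta_fr = (g_new @ g_new) / (g @ g)
    beta_prp = (g_new @ (g_new - g)) / (g @ g)
    return max(-beta_fr, min(beta_prp, beta_fr))
```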
Keywords/Search Tags: Unconstrained optimization, Conjugate gradient method, Line search, Global convergence