
A New Hybrid Conjugate Gradient Algorithm for Nonlinear Optimization Problems

Posted on: 2008-09-17    Degree: Master    Type: Thesis
Country: China    Candidate: X Y Zhu    Full Text: PDF
GTID: 2190360212487974    Subject: Applied Mathematics
Abstract/Summary:
Optimization is a vigorous discipline within applied mathematics. With the development of computers and the demands of practical applications, large-scale optimization problems have received more and more attention, and finding effective algorithms for nonlinear optimization has become a very active research topic. The quasi-Newton and conjugate gradient algorithms are both effective; in this paper we mainly investigate the conjugate gradient algorithm.

In 1964, Fletcher and Reeves proposed the conjugate gradient method for the unconstrained minimization problem min f(x), building on the method put forward independently by Hestenes and Stiefel for solving linear equations. Since then the conjugate gradient method has developed steadily. In paper [5], Powell proved the global convergence of the Fletcher-Reeves (FR) method under the exact line search, and in 1985 Al-Baali proved the global convergence of the FR method under an inexact line search, the strong Wolfe line search. In 1969, Polak, Ribière and Polyak proposed the PRP method. The PRP and HS methods perform better in numerical experiments, but in paper [5] Powell showed that PRP may fail to converge globally even under the exact line search. In recent years, many specialists such as Han Jiye, Yuan Yaxiang and Dai Yuhong have obtained excellent results. In 1995, Dai Yuhong and Yuan Yaxiang proposed the DY method and proved its global convergence. In 1996, Dai and Yuan proposed the generalized Wolfe line search and, from it, the extended line search; they also proved the global convergence of FR under this extended line search in paper [12]. In 1997, L. Grippo and S. Lucidi proposed the Grippo-Lucidi line search and proved the global convergence of the PRP method under it. In 1990, Touati-Ahmed and Storey first introduced a hybrid conjugate gradient algorithm, combining the convergence properties of FR with the numerical performance of PRP. Gilbert and Nocedal studied hybrid conjugate gradient algorithms in depth and carried out a large number of numerical experiments.

In recent years, many new choices of β_k have been proposed. Li Rongsheng proposed a class of new conjugate gradient methods in paper [16] and proved their global convergence under the Wolfe line search, but their numerical performance is not satisfactory. In order to obtain a method with better numerical performance, we improve this method, prove its convergence under the Wolfe line search, and demonstrate its effectiveness through numerical experiments.

In Chapter 1, we briefly introduce the development of optimization and some common optimality conditions for identifying an optimal solution, and we review several common derivative-based descent methods for unconstrained programming.

In Chapter 2, we propose a new conjugate gradient algorithm and prove its global convergence under the Wolfe line search. Numerical results show that it performs more effectively than traditional algorithms.

In Chapter 3, we introduce a generalized Wolfe line search. Building on Chapter 2, we prove the global convergence of the method under this generalized line search and compare the numerical results with those of Chapter 2.
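For reference, the methods named in the abstract all share the same iteration and differ only in the choice of the scalar β_k. The following are the standard textbook definitions of the scheme and of the classical FR, PRP, HS and DY parameters; they are quoted from the general literature, not from the thesis itself:

```latex
% Generic nonlinear conjugate gradient iteration, g_k = \nabla f(x_k)
x_{k+1} = x_k + \alpha_k d_k, \qquad
d_0 = -g_0, \qquad d_{k+1} = -g_{k+1} + \beta_k d_k

% Classical choices of the parameter \beta_k
\beta_k^{FR}  = \frac{\|g_{k+1}\|^2}{\|g_k\|^2}, \qquad
\beta_k^{PRP} = \frac{g_{k+1}^{T}(g_{k+1}-g_k)}{\|g_k\|^2}, \qquad
\beta_k^{HS}  = \frac{g_{k+1}^{T}(g_{k+1}-g_k)}{d_k^{T}(g_{k+1}-g_k)}, \qquad
\beta_k^{DY}  = \frac{\|g_{k+1}\|^2}{d_k^{T}(g_{k+1}-g_k)}
```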
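The line searches discussed throughout the abstract select the step length α_k along the direction d_k. In their usual form, with constants 0 < δ < σ < 1, the Wolfe conditions and the strong Wolfe variant (the one Al-Baali used for FR) read:

```latex
% Wolfe conditions: sufficient decrease plus curvature, 0 < \delta < \sigma < 1
f(x_k + \alpha_k d_k) \le f(x_k) + \delta \alpha_k g_k^{T} d_k
g(x_k + \alpha_k d_k)^{T} d_k \ge \sigma g_k^{T} d_k

% Strong Wolfe: the curvature condition is tightened to
|g(x_k + \alpha_k d_k)^{T} d_k| \le -\sigma g_k^{T} d_k
```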
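To illustrate how a hybrid conjugate gradient method is assembled from these pieces, here is a minimal Python sketch using the classical truncated PRP/FR hybrid rule usually attributed to Touati-Ahmed and Storey, with SciPy's Wolfe line search. This is a generic sketch of the technique, not the new algorithm proposed in the thesis; the function and parameter names are our own.

```python
import numpy as np
from scipy.optimize import line_search, rosen, rosen_der

def hybrid_cg(f, grad, x0, tol=1e-6, max_iter=1000):
    """Nonlinear CG with the truncated PRP/FR hybrid beta (illustrative sketch)."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        # Step length satisfying the (strong) Wolfe conditions
        alpha = line_search(f, grad, x, d, gfk=g)[0]
        if alpha is None:
            d = -g  # line search failed: restart along steepest descent
            alpha = line_search(f, grad, x, d, gfk=g)[0] or 1e-8
        x_new = x + alpha * d
        g_new = grad(x_new)
        beta_fr = (g_new @ g_new) / (g @ g)
        beta_prp = g_new @ (g_new - g) / (g @ g)
        # Hybrid rule: take PRP while 0 <= beta_PRP <= beta_FR, else fall back to FR
        beta = beta_prp if 0.0 <= beta_prp <= beta_fr else beta_fr
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x

# Example: minimize the Rosenbrock function starting from the origin
x_star = hybrid_cg(rosen, rosen_der, np.zeros(5))
```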
Keywords/Search Tags: Unconstrained optimization, Conjugate gradient method, Inexact line search, Global convergence