
Research on the Convergence of the Conjugate Gradient Method

Posted on: 2013-06-05
Degree: Master
Type: Thesis
Country: China
Candidate: Y Chen
Full Text: PDF
GTID: 2230330374476697
Subject: Applied Mathematics
Abstract/Summary:
Optimization methods are an important part of operations research and are widely applied in many fields, such as natural science, practical production, engineering design and modern management. Many practical problems can be reduced to optimization problems, and the key to studying optimization is to design efficient algorithms for solving them. Optimization methods can be classified in several ways: according to the values of the decision variables, into discrete and continuous optimization; according to whether the functions involved are continuously differentiable, into smooth and non-smooth optimization; and according to whether the functions are linear in the variables, into linear and non-linear optimization.

Unconstrained optimization problems are the basis of optimization, and iterative methods are usually used to find their optimal solutions. Commonly used methods include the steepest descent method, Newton's method, quasi-Newton methods and the conjugate gradient method. The steepest descent method has the advantages of small storage, a simple structure, easy implementation and good global convergence, but its convergence is slow: in theory it only achieves a local linear rate. Newton's method has attracted widespread attention because of its fast local convergence; its second-order convergence rate makes the algorithm popular. However, Newton's method requires the second derivative matrix of the objective function, and when the Hessian ∇²f(x_k) is singular, the Newton direction may not exist, or may exist but fail to be a descent direction. Quasi-Newton methods overcome this defect of Newton's method: they only need the first-order derivatives of the objective function, and most quasi-Newton methods are descent algorithms with good global convergence and a superlinear convergence rate. These advantages make them popular. However, a matrix must be stored at each iteration, and a system of linear equations must be solved to determine the descent direction; since the coefficient matrix of this system is usually dense, quasi-Newton methods are not suitable for large-scale problems.

The conjugate gradient method has a history of more than 50 years. It was first introduced by Hestenes and Stiefel in 1952 for solving linear equations and was extended to non-linear optimization by Fletcher and Reeves in 1964. Subsequently, Beale, Powell, Fletcher and other well-known experts in non-linear optimization conducted in-depth research on the conjugate gradient method and obtained very rich results. However, quasi-Newton methods appeared at almost the same time, and their good performance and fast convergence quickly won favor, so for a long time the conjugate gradient method was neglected by researchers. In recent years, with the rapid development of computers and the needs of practical problems, large-scale optimization has attracted more and more attention, and the conjugate gradient method is a major approach to large-scale problems. Thus the theoretical study of the conjugate gradient method deserves renewed attention.
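For reference, the generic non-linear conjugate gradient iteration discussed above can be written in the following standard form; this is the classical framework, not the specific new methods proposed in this thesis:

$$
x_{k+1} = x_k + \alpha_k d_k, \qquad d_0 = -g_0, \qquad d_{k+1} = -g_{k+1} + \beta_k d_k,
$$

where $g_k = \nabla f(x_k)$, the step length $\alpha_k$ is determined by an (often inexact) line search, and different choices of the scalar $\beta_k$ yield the classical methods, for example

$$
\beta_k^{HS} = \frac{g_{k+1}^{\top}(g_{k+1}-g_k)}{d_k^{\top}(g_{k+1}-g_k)}, \qquad
\beta_k^{FR} = \frac{\|g_{k+1}\|^2}{\|g_k\|^2}, \qquad
\beta_k^{PRP} = \frac{g_{k+1}^{\top}(g_{k+1}-g_k)}{\|g_k\|^2},
$$

corresponding to the Hestenes–Stiefel, Fletcher–Reeves and Polak–Ribière–Polyak formulas respectively.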
The conjugate gradient method is widely applied not only in science, engineering, economics and management systems, but also in government decision-making, manufacturing management, transportation, military affairs and national defense. On the basis of existing results on the conjugate gradient method, this paper discusses the method further, proposes several new conjugate gradient methods and establishes convergence results for them. The main results obtained in this dissertation may be summarized as follows.

The first chapter introduces the relevant concepts of the non-linear conjugate gradient method and several common approaches to solving optimization problems, and outlines the major results obtained in this paper.

The second chapter summarizes the present research on the non-linear conjugate gradient method at home and abroad.

The third chapter gives a theoretical analysis of the global convergence of a non-linear conjugate gradient method proposed in the literature under a new inexact line search proposed in the literature; global convergence is established under general assumptions.

The fourth chapter gives a new non-linear conjugate gradient method for unconstrained optimization problems, and establishes the sufficient descent property and the global convergence of the new algorithm under the new inexact line search proposed in the literature.

The fifth chapter establishes a new theoretical analysis of the global convergence of a non-linear conjugate gradient method proposed in the literature under the new inexact line search proposed in the literature, and then gives the proofs of the sufficient descent property and of convergence. A sketch of how such a method is typically implemented is given below.
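The following minimal sketch combines a Fletcher–Reeves update with a simple Armijo backtracking line search. Both are stand-in choices for illustration only, since this abstract does not specify the new β formulas or the new inexact line search studied in the thesis.

```python
import numpy as np

def armijo_backtracking(f, xk, dk, gk, alpha0=1.0, rho=0.5, c=1e-4):
    """Simple Armijo backtracking line search (a stand-in for the
    inexact line searches discussed in the thesis)."""
    alpha = alpha0
    fxk = f(xk)
    slope = gk.dot(dk)  # directional derivative along dk (negative for a descent direction)
    while f(xk + alpha * dk) > fxk + c * alpha * slope:
        alpha *= rho
    return alpha

def fletcher_reeves_cg(f, grad, x0, tol=1e-6, max_iter=1000):
    """Non-linear conjugate gradient with the Fletcher-Reeves beta.
    Illustrative only; the thesis studies other beta formulas and a
    different inexact line search."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g  # initial search direction: steepest descent
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        if g.dot(d) >= 0:
            d = -g  # safeguard: restart if d is not a descent direction
        alpha = armijo_backtracking(f, x, d, g)
        x = x + alpha * d
        g_new = grad(x)
        beta = g_new.dot(g_new) / g.dot(g)  # Fletcher-Reeves coefficient
        d = -g_new + beta * d
        g = g_new
    return x

# Example: minimize a convex quadratic f(x) = 0.5 x^T A x - b^T x,
# whose minimizer solves the linear system A x = b (the setting in
# which the conjugate gradient method was originally introduced).
if __name__ == "__main__":
    A = np.array([[3.0, 1.0], [1.0, 2.0]])
    b = np.array([1.0, 1.0])
    f = lambda x: 0.5 * x.dot(A).dot(x) - b.dot(x)
    grad = lambda x: A.dot(x) - b
    print(fletcher_reeves_cg(f, grad, np.zeros(2)))  # approx. [0.2, 0.4]
```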
Keywords/Search Tags: conjugate gradient, inexact line search, global convergence property