
Conjugate Gradient Methods For Solving Unconstrained Optimization And Nonlinear Equations

Posted on: 2015-02-07
Degree: Doctor
Type: Dissertation
Country: China
Candidate: Y Y Huang
Full Text: PDF
GTID: 1220330464468911
Subject: Applied Mathematics

Abstract/Summary:
Conjugate gradient methods have the advantages of simple iterations and low memory requirements, which makes them standard tools for large-scale unconstrained optimization. More recently, conjugate gradient methods for solving monotone nonlinear equations have attracted attention, and some progress has been made. Among conjugate gradient methods, those with the sufficient descent property are often the most effective. This thesis studies this class of methods and applies them to unconstrained nonlinear optimization and to nonlinear equations. Starting from unconstrained differentiable problems, it investigates a general form of conjugate gradient methods with the sufficient descent property, and then develops methods within this framework for convex constrained nonlinear monotone equations and for nondifferentiable convex unconstrained optimization. The major contributions of this thesis are outlined as follows:

1. Building on earlier work, a class of conjugate gradient methods is investigated that satisfies the sufficient descent property and fits into a unified framework (the generic iteration, the sufficient descent condition, and the weak Wolfe conditions are written out after this list). Theoretical analysis shows that these methods are strongly convergent whenever the weak Wolfe line search is satisfied. Several specific versions are presented, and numerical results on large-scale unconstrained optimization problems show that they are efficient.

2. A conjugate gradient method that uses only gradient information is investigated for solving the first-order optimality condition of unconstrained nonlinear optimization. It combines a hybrid Dai-Yuan conjugate gradient method with a practical Armijo-type line search proposed by Dong. Because the method uses only gradient information and avoids function evaluations, it has favorable numerical behavior and also provides a way to solve nonlinear equations (a gradient-only sketch follows this list). Its convergence is established theoretically, and numerical experiments show that it efficiently solves unconstrained optimization problems and boundary value problems, giving it a broad application scope.

3. Several modified basic and hybrid nonlinear conjugate gradient methods are combined with the Armijo-type line search proposed by Dong to solve the first-order optimality condition of unconstrained nonlinear optimization. Their numerical behavior is investigated on CUTEr test problems and on boundary value problems. The results show that these methods are efficient when only gradient information is available, and that the hybrid versions outperform the basic versions.

4. Two unified frameworks of conjugate gradient methods with the sufficient descent property are considered. Combined with the hyperplane projection method of Solodov and Svaiter, they are extended to solve convex constrained nonlinear monotone equations (a projection sketch follows this list). Their global convergence is proved under mild conditions, and numerical results show that these methods are efficient and applicable to large-scale nonsmooth equations.

5. A class of conjugate gradient methods with the sufficient descent condition is applied to nondifferentiable convex unconstrained optimization. These methods are built on the proximal point method together with traditional conjugate gradient methods, and they are globally convergent under mild conditions.
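For concreteness, the generic scheme behind contributions 1-3 can be written out as follows. The notation is the standard one from the conjugate gradient literature and is supplied here for illustration rather than quoted from the thesis: g_k denotes the gradient of f at x_k, c > 0 is the sufficient descent constant, and delta and sigma are the Wolfe parameters.

```latex
% Generic nonlinear CG iteration for min f(x), with g_k = \nabla f(x_k):
\[
x_{k+1} = x_k + \alpha_k d_k, \qquad
d_k =
\begin{cases}
-g_k, & k = 0,\\
-g_k + \beta_k d_{k-1}, & k \ge 1.
\end{cases}
\]
% Sufficient descent property (some constant c > 0, for all k):
\[
g_k^{\mathsf{T}} d_k \le -c\,\lVert g_k \rVert^2 .
\]
% Weak Wolfe line search for the step size \alpha_k, with 0 < \delta < \sigma < 1:
\[
f(x_k + \alpha_k d_k) \le f(x_k) + \delta\,\alpha_k\, g_k^{\mathsf{T}} d_k,
\qquad
\nabla f(x_k + \alpha_k d_k)^{\mathsf{T}} d_k \ge \sigma\, g_k^{\mathsf{T}} d_k .
\]
% The Dai--Yuan choice of \beta_k, the basis of the hybrid in contribution 2:
\[
\beta_k^{\mathrm{DY}} = \frac{\lVert g_k \rVert^2}{d_{k-1}^{\mathsf{T}} (g_k - g_{k-1})} .
\]
```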
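The following Python sketch shows how such a method can be driven by gradients alone, in the spirit of contributions 2 and 3. It is a minimal illustration under stated assumptions: the backtracking test (accept alpha once -grad(x + alpha*d)^T d >= sigma * alpha * ||d||^2) is a generic derivative-only rule, not necessarily Dong's exact Armijo-type line search, and the safeguarded Dai-Yuan beta stands in for the thesis's hybrid formula.

```python
import numpy as np

def gradient_only_cg(grad, x0, tol=1e-8, max_iter=1000, sigma=1e-4, rho=0.5):
    """CG-type iteration for solving grad f(x) = 0 that never evaluates f.
    Illustrative only: beta is the plain Dai-Yuan formula with a
    nonnegativity safeguard, and the backtracking test is a generic
    derivative-only rule, not the thesis's exact line search."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        if g @ d >= 0.0:               # safeguard: restart with steepest descent
            d = -g
        # Derivative-only backtracking: shrink alpha until
        # -grad(x + alpha d)^T d >= sigma * alpha * ||d||^2.
        alpha = 1.0
        while alpha > 1e-12 and -grad(x + alpha * d) @ d < sigma * alpha * (d @ d):
            alpha *= rho
        x = x + alpha * d
        g_new = grad(x)
        denom = d @ (g_new - g)
        # Dai-Yuan parameter, clipped at zero so the update stays safe.
        beta = max((g_new @ g_new) / denom, 0.0) if abs(denom) > 1e-16 else 0.0
        d = -g_new + beta * d
        g = g_new
    return x
```

For instance, minimizing f(x) = 0.5 x^T A x - b^T x only requires passing its gradient: `gradient_only_cg(lambda x: A @ x - b, np.zeros(2))` for a symmetric positive definite A, with no objective-function evaluations anywhere in the loop.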
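Contribution 4's setting can be illustrated with the basic hyperplane projection framework of Solodov and Svaiter. The sketch below is a deliberate simplification: it takes the search direction to be -F(x), whereas the thesis plugs in sufficient-descent conjugate gradient directions, and proj is a user-supplied Euclidean projection onto the convex constraint set C.

```python
import numpy as np

def solodov_svaiter_projection(F, proj, x0, tol=1e-8, max_iter=1000,
                               sigma=1e-4, rho=0.5):
    """Hyperplane projection method of Solodov and Svaiter for monotone
    F(x) = 0 over a closed convex set C (proj = projection onto C).
    Illustrative: the thesis replaces d = -F(x) with CG directions."""
    x = proj(np.asarray(x0, dtype=float))
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        d = -Fx
        # Backtracking: find t with -F(x + t d)^T d >= sigma * t * ||d||^2.
        t = 1.0
        while -F(x + t * d) @ d < sigma * t * (d @ d):
            t *= rho
        z = x + t * d                  # trial point separating x from the solutions
        Fz = F(z)
        if np.linalg.norm(Fz) < tol:   # z already solves the equation
            return z
        # Project x onto the hyperplane {u : F(z)^T (u - z) = 0}, then onto C;
        # monotonicity of F makes this a Fejer step toward the solution set.
        x = proj(x - (Fz @ (x - z)) / (Fz @ Fz) * Fz)
    return x
```

As a quick test one can take `proj = lambda x: np.clip(x, 0.0, None)` (projection onto the nonnegative orthant) and the monotone map `F = lambda x: x + np.sin(x)`, whose constrained solution is x = 0.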
Keywords/Search Tags: Conjugate gradient method, Large-scale unconstrained optimization, Nonlinear equation, Sufficient descent condition, Global convergence, Numerical experiments