
Convergence Properties Of Two Classes Of Methods For Solving Unconstrained Optimization Problems

Posted on: 2013-02-24
Degree: Master
Type: Thesis
Country: China
Candidate: X Hu
Full Text: PDF
GTID: 2210330374461530
Subject: Operational Research and Cybernetics
Abstract/Summary:
The nonlinear conjugate gradient method is an important class of methods for solving unconstrained optimization problems, especially large-scale ones. It is widely used because of its simplicity, small storage requirements, and fast convergence. Many practical problems, such as oil exploration, atmospheric modeling, and aerospace design, are large-scale optimization problems and are often solved by conjugate gradient methods.

The trust region method is another important class of methods for solving unconstrained optimization problems. Trust region methods were introduced in the 1970s, have been widely studied in the nonlinear optimization community, and remain one of the attractive topics in the field. They have two advantages: strong stability and global convergence. This thesis further investigates conjugate gradient methods and trust region methods for solving unconstrained optimization problems without convexity assumptions on the objective function. The two main results of this thesis are as follows.

First, we present two classes of nonlinear conjugate gradient methods for solving unconstrained optimization problems and prove their global convergence under the strong Wolfe line search. Numerical results show that both classes of methods are efficient.

Second, we give a new trust region method for solving unconstrained optimization problems, based on the quadratic model and a fixed-step modified BFGS formula, and prove its global convergence.
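To illustrate the ingredients of the first result, the following is a minimal sketch of a nonlinear conjugate gradient iteration with a strong Wolfe line search. It uses the classical PRP+ update as a stand-in, since the abstract does not specify the two new classes of methods; the bisection-style line search, the parameters `c1`, `c2`, and the quadratic test function are all illustrative assumptions, not the thesis's actual algorithms.

```python
import numpy as np

def strong_wolfe(f, grad, x, d, c1=1e-4, c2=0.1, max_iter=50):
    """Bisection-style search for a step satisfying the strong Wolfe conditions:
    f(x+a*d) <= f(x) + c1*a*g0  and  |grad(x+a*d)@d| <= c2*|g0|."""
    f0 = f(x)
    g0 = grad(x) @ d                       # directional derivative; negative for descent d
    lo, hi, alpha = 0.0, np.inf, 1.0
    for _ in range(max_iter):
        fa = f(x + alpha * d)
        ga = grad(x + alpha * d) @ d
        if fa > f0 + c1 * alpha * g0:      # sufficient-decrease (Armijo) condition fails
            hi = alpha
        elif abs(ga) > c2 * abs(g0):       # strong curvature condition fails
            if ga < 0:
                lo = alpha                 # slope still negative: step too short
            else:
                hi = alpha                 # slope positive: step too long
        else:
            return alpha
        alpha = 2 * lo if np.isinf(hi) else 0.5 * (lo + hi)
    return alpha

def prp_cg(f, grad, x0, tol=1e-6, max_iter=2000):
    """Nonlinear CG with the PRP+ beta (illustrative choice, not the thesis's methods)."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        if g @ d >= 0:                     # safeguard: restart if d is not a descent direction
            d = -g
        alpha = strong_wolfe(f, grad, x, d)
        x = x + alpha * d
        g_new = grad(x)
        beta = max(g_new @ (g_new - g) / (g @ g), 0.0)  # PRP+ restart rule
        d = -g_new + beta * d
        g = g_new
    return x

# Illustrative use on an ill-conditioned quadratic (hypothetical test problem).
f = lambda x: 0.5 * (x[0]**2 + 10.0 * x[1]**2)
grad = lambda x: np.array([x[0], 10.0 * x[1]])
x_star = prp_cg(f, grad, np.array([1.0, 1.0]))
```

The choice c2 = 0.1 is the tighter curvature tolerance customary for conjugate gradient methods; the looser c2 = 0.9 typical of quasi-Newton methods would admit steps that make the CG directions poor.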
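For the second result, the following is a generic trust-region skeleton built on a quadratic model, shown only to fix ideas. It solves the subproblem with the Cauchy point and takes the Hessian approximation as a given callable; the thesis's fixed-step modified BFGS update and its specific acceptance rules are not reproduced here, and all thresholds (0.1, 0.25, 0.75) and the radius cap are conventional illustrative values.

```python
import numpy as np

def cauchy_point(g, B, delta):
    """Minimizer of the quadratic model m(p) = g@p + 0.5*p@B@p along -g,
    restricted to the trust region ||p|| <= delta."""
    gnorm = np.linalg.norm(g)
    gBg = g @ B @ g
    tau = 1.0
    if gBg > 0:
        tau = min(1.0, gnorm**3 / (delta * gBg))
    return -tau * (delta / gnorm) * g

def trust_region(f, grad, B, x0, delta0=1.0, tol=1e-6, max_iter=200):
    """Basic trust-region loop: compare actual vs. predicted reduction,
    then accept/reject the step and adjust the radius."""
    x, delta = np.asarray(x0, dtype=float), delta0
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        Bx = B(x)
        p = cauchy_point(g, Bx, delta)
        pred = -(g @ p + 0.5 * p @ Bx @ p)   # reduction predicted by the quadratic model
        ared = f(x) - f(x + p)               # actual reduction in the objective
        rho = ared / pred if pred > 0 else 0.0
        if rho < 0.25:
            delta *= 0.25                    # poor agreement: shrink the region
        elif rho > 0.75 and np.isclose(np.linalg.norm(p), delta):
            delta = min(2.0 * delta, 10.0)   # good agreement on the boundary: expand
        if rho > 0.1:
            x = x + p                        # accept the step
    return x

# Illustrative use with the exact Hessian of a quadratic standing in for
# the thesis's modified-BFGS approximation (hypothetical test problem).
f = lambda x: 0.5 * (x[0]**2 + 10.0 * x[1]**2)
grad = lambda x: np.array([x[0], 10.0 * x[1]])
B = lambda x: np.diag([1.0, 10.0])
x_star = trust_region(f, grad, B, np.array([3.0, -2.0]))
```

The stability the abstract attributes to trust region methods is visible in the ratio test: a step is only accepted when the quadratic model's predicted reduction is actually realized, so a bad Hessian approximation shrinks the region rather than derailing the iteration.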
Keywords/Search Tags: Unconstrained optimization, strong Wolfe line search, conjugate gradient method, trust region algorithm, global convergence