
Some Algorithms in Nonlinear Optimization Problems

Posted on: 2008-03-31
Degree: Master
Type: Thesis
Country: China
Candidate: Y. M. Zheng
Full Text: PDF
GTID: 2120360218963641
Subject: Computational Mathematics
Abstract/Summary:
A new assumption on the scalar parameter of the conjugate gradient method is given to ensure that the search direction is a sufficient descent direction. A new class of memory gradient methods for unconstrained optimization is presented, and the global convergence of the new algorithm is established under the condition that the gradient of the objective function is uniformly continuous. Numerical results show that the new algorithm is efficient. The algorithm is then applied in the following directions: (i) Based on a reformulation of the complementarity problem as an unconstrained optimization problem, the memory gradient method is used to solve the complementarity problem; the global convergence and linear convergence of the new algorithm are proved, and numerical results show that it is efficient. (ii) Based on a reformulation of the semi-definite complementarity problem as an unconstrained optimization problem via the generalized D-gap merit function, the memory gradient algorithm is applied to the semi-definite complementarity problem and is proved globally convergent.

Furthermore, by increasing the number of memory terms, the memory gradient algorithm is generalized to a three-term memory gradient method. For the choice of step size, a new non-monotone line-search rule is proposed that admits a larger step size at each iteration and thus benefits the fast convergence of the algorithm. The global convergence of the new algorithm is established under the condition that the gradient of the objective function is uniformly continuous, and linear convergence is also shown under certain conditions. Numerical results show that the new algorithm is efficient. Finally, the new three-term memory gradient method is used to solve the trust-region subproblem, ensuring that the trial step is a sufficient descent step.
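To illustrate the idea of a memory gradient direction with a sufficient descent safeguard, here is a minimal sketch (not the thesis's exact scheme): the direction is d_k = -g_k + beta_k * d_{k-1}, and the memory coefficient beta_k is shrunk until g_k^T d_k <= -c ||g_k||^2 holds. The step size uses a standard backtracking Armijo line search rather than the non-monotone rule of the thesis; all parameter values are illustrative assumptions.

```python
import numpy as np

def memory_gradient(f, grad, x0, c=0.5, beta=0.4, sigma=1e-4,
                    tol=1e-8, max_iter=500):
    """Illustrative memory gradient method (hypothetical parameters).

    Direction: d_k = -g_k + beta_k * d_{k-1}, with beta_k reduced until
    the sufficient descent condition g_k^T d_k <= -c * ||g_k||^2 holds.
    """
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g  # first direction: steepest descent
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        # Backtracking Armijo rule: f(x + t d) <= f(x) + sigma * t * g^T d.
        t, gd = 1.0, g @ d
        while f(x + t * d) > f(x) + sigma * t * gd:
            t *= 0.5
        x = x + t * d
        g_new = grad(x)
        # Memory term: halve beta_k until sufficient descent is restored
        # (as beta_k -> 0 the direction tends to -g_new, so this terminates).
        bk = beta
        d_new = -g_new + bk * d
        while g_new @ d_new > -c * (g_new @ g_new):
            bk *= 0.5
            d_new = -g_new + bk * d
        g, d = g_new, d_new
    return x

# Usage: minimize f(x) = ||x - 1||^2 from the origin.
x_star = memory_gradient(lambda x: np.sum((x - 1.0) ** 2),
                         lambda x: 2.0 * (x - 1.0),
                         np.zeros(2))
```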
A new algorithm for unconstrained optimization that combines trust-region and line-search techniques is proposed. When the trial step increases the objective function, the algorithm does not re-solve the subproblem; instead, it performs an inexact Armijo line search to obtain the next iterate. The convergence of the algorithm is proved under certain conditions. Numerical results show that the new algorithm is efficient.
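The hybrid idea above can be sketched as follows. This is an illustrative toy version, not the thesis's algorithm: the subproblem is solved cheaply by the classical Cauchy point, and when the trial step fails to reduce f, a backtracking (inexact Armijo) line search along the same trial step replaces re-solving the subproblem. All parameter values are assumptions.

```python
import numpy as np

def tr_armijo_hybrid(f, grad, hess, x0, delta=1.0, sigma=1e-4,
                     tol=1e-8, max_iter=200):
    """Sketch of a trust-region/line-search hybrid (illustrative only)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        gnorm = np.linalg.norm(g)
        if gnorm < tol:
            break
        B = hess(x)
        # Cauchy point: minimize the quadratic model along -g within the ball.
        gBg = g @ B @ g
        tau = min(1.0, gnorm**3 / (delta * gBg)) if gBg > 0 else 1.0
        s = -(tau * delta / gnorm) * g  # trial step
        if f(x + s) < f(x):
            # Trial step reduces f: accept it and enlarge the radius.
            x = x + s
            delta *= 2.0
        else:
            # Trial step increases f: do NOT re-solve the subproblem;
            # backtrack along s with an inexact Armijo rule instead.
            t, gs = 1.0, g @ s  # gs < 0, so s is a descent direction
            while f(x + t * s) > f(x) + sigma * t * gs and t > 1e-12:
                t *= 0.5
            x = x + t * s
            delta *= 0.5
    return x

# Usage: minimize the convex quadratic f(x) = 0.5 * x^T A x.
A = np.diag([1.0, 10.0])
x_min = tr_armijo_hybrid(lambda x: 0.5 * x @ A @ x,
                         lambda x: A @ x,
                         lambda x: A,
                         np.array([1.0, 1.0]))
```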
Keywords/Search Tags: Memory Gradient Method, Semi-definite Complementarity Problem, Non-monotone Line-search, Subproblem of Trust Region Algorithm, Inexact Armijo Line-search