
Memory-Predicted Methods For Solving Unconstrained Optimization Problems

Posted on: 2007-11-30
Degree: Master
Type: Thesis
Country: China
Candidate: J Y Tang
Full Text: PDF
GTID: 2120360182493310
Subject: Operational Research and Cybernetics
Abstract/Summary:
The thesis mainly considers memory-predicted methods for solving unconstrained optimization problems. Four chapters are included in this dissertation.

Chapter 1 gives an introduction to the thesis, mainly covering the current state of research on memory-predicted methods. The main contributions of this work are also listed in this chapter.

Chapter 2 considers unconstrained optimization problems. For this problem, Shi (2003) proposed a new memory gradient method and proved its global convergence under an exact line search rule. As is well known, an exact one-dimensional line search usually requires a large amount of computation; moreover, when the iterate is far from the solution of the problem, solving a one-dimensional subproblem exactly is often ineffective. In this thesis, we modify the choice of the parameter β_k in the search direction of the original algorithm. The global convergence and linear convergence rate of the new method are proved under some mild conditions.

Chapter 3 presents a new memory gradient method for unconstrained optimization problems. The new algorithm has the following features. (1) At each iteration, the method takes a combination of the current gradient and some previous search directions as the new search direction, which has a simpler formulation and does not require computing or storing matrices; the method therefore needs much less computation and storage and is suitable for solving large-scale optimization problems. (2) Although the parameter β_k in the search direction d_k is similar to the Doxin formula, the new method is globally convergent under the strong Wolfe line search, whereas [16] proved that the strong Wolfe line search cannot guarantee global convergence of the gradient method when β_k takes the Doxin formula. Numerical experiments show that the new algorithm is not only feasible but also very effective.

Chapter 4 studies a new memory-predicted method for unconstrained optimization. The new method only needs to estimate some parameters at each iteration, without a line search for the step size. In this way, the algorithm reduces the number of iterations, function evaluations and gradient evaluations, and thus reduces the computation and storage of the method. Moreover, the method can use the current and previous multi-step iterative information to predict the parameters of the algorithm, which accelerates convergence and makes the algorithm stable and robust.
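The abstract does not give the exact update formulas, so the following is only a rough, assumed sketch of a generic memory gradient iteration of the kind described in Chapters 2 and 3: the search direction combines the current negative gradient with a few previous directions, and the step size comes from a simple Armijo backtracking rule. The fixed weights (0.1) and the Armijo rule are illustrative placeholders, not the thesis's β_k or its line search rules.

import numpy as np

def memory_gradient(f, grad, x0, m=3, max_iter=1000, tol=1e-6):
    """Generic memory gradient sketch (not the thesis's algorithm):
    d_k = -g_k + weighted sum of up to m previous directions."""
    x = np.asarray(x0, dtype=float)
    history = []                                  # last search directions
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        # combine current negative gradient with up to m previous directions;
        # the 0.1 weights are hypothetical stand-ins for the beta_k parameters
        d = -g + sum(0.1 * dp for dp in history[-m:])
        if g.dot(d) >= 0:                         # safeguard: keep a descent direction
            d = -g
        # Armijo backtracking line search for the step size alpha_k
        alpha, c, rho = 1.0, 1e-4, 0.5
        while f(x + alpha * d) > f(x) + c * alpha * g.dot(d):
            alpha *= rho
        x = x + alpha * d
        history.append(d)
    return x

# example usage on a simple quadratic f(x) = 0.5 * ||x||^2
if __name__ == "__main__":
    f = lambda x: 0.5 * x @ x
    grad = lambda x: x
    print(memory_gradient(f, grad, np.array([3.0, -4.0])))

The Chapter 4 method described above would replace the backtracking loop by a direct prediction of the step and direction parameters from current and previous iterates, so no line search is performed.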
Keywords/Search Tags: Unconstrained optimization, memory-predicted methods, line search rule, linear convergence rate, global convergence