
A Method for Unconstrained Optimization Problems

Posted on: 2009-06-21
Degree: Master
Type: Thesis
Country: China
Candidate: Q F Fu
Full Text: PDF
GTID: 2120360245472860
Subject: Applied Mathematics
Abstract/Summary:
Optimization is a branch of numerical mathematics that developed around the 1930s, and numerical methods for unconstrained optimization are an active subject within it. Solving unconstrained optimization problems rapidly and effectively matters in its own right, and such problems also arise as subproblems of many constrained optimization problems. How to design fast and effective algorithms for unconstrained optimization is therefore a question of great concern to optimization researchers. Many methods for these problems now exist, and the theory is mature; each has its own strengths and, to varying degrees, its own weaknesses. Combining different methods into a hybrid algorithm is an effective way to improve on them.

We first analyze several typical unconstrained optimization methods and line search methods in depth, studying Newton's method in particular. The classical Newton method and its modified forms all require the Hessian matrix to be positive definite or positive semidefinite, so as to guarantee that the Newton equation (or modified Newton equation) has a solution and that this solution is a descent direction of the objective function. Otherwise, Newton's method is not feasible.

Second, to overcome this difficulty, we develop a hybrid method that combines the regularized Newton method with the steepest descent method for nonconvex objective functions. In this scheme a descent direction of the objective function always exists, regardless of convexity: whenever the regularized Newton direction does not exist, or exists but is not a descent direction of the objective function, the negative gradient is used as the search direction instead. Under mild conditions, and for each of five line search rules, we prove the global convergence of the new method. Moreover, we show that after finitely many iterations the hybrid method reduces to the regularized Newton method, so it possesses the local quadratic convergence property.

Finally, the new method is applied to nine typical test problems, and the iteration counts obtained with the Newton direction are compared with those obtained with the gradient direction. The numerical results show that the Newton direction performs better than the gradient direction, and that the new method is feasible and converges rapidly.
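To make the hybrid scheme concrete, the following is a minimal sketch in Python of one plausible reading of the algorithm described above: solve the regularized Newton system (H + mu*I) d = -g, fall back to the steepest descent direction -g whenever the solve fails or d is not a descent direction, and pick the step length by Armijo backtracking. The regularization parameter mu = ||g||, the Armijo constants, and the Rosenbrock test function are illustrative assumptions, not details taken from the thesis, which considers five line search rules and does not fix these choices in the abstract.

```python
import numpy as np

def hybrid_step(f, grad, hess, x, c1=1e-4, beta=0.5, max_backtracks=50):
    """One step of a hybrid regularized-Newton / steepest-descent scheme.

    Solves (H + mu*I) d = -g for the regularized Newton direction; if the
    solve fails or d is not a descent direction, falls back to d = -g.
    Armijo backtracking is an illustrative choice of line search.
    """
    g = grad(x)
    mu = np.linalg.norm(g)            # illustrative regularization parameter
    try:
        d = np.linalg.solve(hess(x) + mu * np.eye(x.size), -g)
        if g @ d >= 0:                # not a descent direction: fall back
            d = -g
    except np.linalg.LinAlgError:     # singular system: fall back to -g
        d = -g

    # Armijo backtracking line search
    t, fx = 1.0, f(x)
    for _ in range(max_backtracks):
        if f(x + t * d) <= fx + c1 * t * (g @ d):
            break
        t *= beta
    return x + t * d

def minimize(f, grad, hess, x0, tol=1e-8, max_iter=500):
    """Iterate hybrid_step until the gradient norm falls below tol."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        if np.linalg.norm(grad(x)) < tol:
            break
        x = hybrid_step(f, grad, hess, x)
    return x

# Example: the nonconvex Rosenbrock function, minimizer at (1, 1)
f    = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
grad = lambda x: np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
                           200 * (x[1] - x[0]**2)])
hess = lambda x: np.array([[2 - 400 * (x[1] - 3 * x[0]**2), -400 * x[0]],
                           [-400 * x[0], 200.0]])
print(minimize(f, grad, hess, [-1.2, 1.0]))
```

The descent test g'd < 0 before accepting the Newton direction is what makes the fallback work for any Hessian, convex or not, which is the property the abstract emphasizes; the same test also guarantees that the Armijo condition can be met for a small enough step.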
Keywords/Search Tags:Convex Function, Unconstrained Optimization Problem, Regularized Newton Method, Global Convergence, Quadratic Convergence