
Some Algorithms In Unconstrained Optimization

Posted on: 2007-09-22  Degree: Master  Type: Thesis
Country: China  Candidate: J F Yang  Full Text: PDF
GTID: 2120360182499584  Subject: Basic mathematics
Abstract/Summary:
Numerical methods for unconstrained optimization form an active subject in numerical analysis. Solving unconstrained optimization problems rapidly and effectively is important not only in its own right, but also because such problems arise as subproblems within many constrained optimization methods. Designing fast and effective algorithms for unconstrained optimization is therefore a central concern of optimization researchers.

In Chapter 2, a new class of steplengths for symmetric positive definite quadratic functions is presented and convergence results are established. Moreover, we interpret the BB choice of steplength from the viewpoint of interpolation and generalize the formulae given in [21] to a family. Numerical results show that some choices of the steplength work very well and are comparable to the BB steplengths.

In Chapter 3, we give some modifications of the conjugate gradient method and establish convergence results. The modified schemes generate descent directions at each iteration, so there is no need to restart the algorithm. Numerical results demonstrate the efficiency of the modified methods, which perform better than the well-known PRP and HS conjugate gradient methods.

In Chapter 4, a subspace method is constructed for unconstrained optimization, and a convergence theorem is established for uniformly convex functions. The subspace method uses storage economically and converges rapidly.
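To illustrate the kind of steplength the abstract refers to, the following is a minimal sketch of the classical Barzilai-Borwein (BB) gradient method applied to a symmetric positive definite quadratic f(x) = ½xᵀAx − bᵀx. The test matrix, right-hand side, tolerance, and iteration limit are illustrative assumptions and are not taken from the thesis; the thesis proposes its own family of steplengths, which is not reproduced here.

```python
import numpy as np

def bb_gradient(A, b, x0, tol=1e-8, max_iter=500):
    """Gradient method with the BB1 steplength on f(x) = 0.5 x^T A x - b^T x.

    The minimizer satisfies the linear system A x = b.
    """
    x = x0.astype(float)
    g = A @ x - b                  # gradient of the quadratic at x
    alpha = 1.0                    # initial steplength (illustrative choice)
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        x_new = x - alpha * g
        g_new = A @ x_new - b
        s = x_new - x              # difference of iterates
        y = g_new - g              # difference of gradients
        alpha = (s @ s) / (s @ y)  # BB1 steplength: s^T s / s^T y
        x, g = x_new, g_new
    return x

# Small illustrative problem (assumed data, not from the thesis)
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([1.0, 1.0])
x_star = bb_gradient(A, b, np.zeros(2))
print(np.allclose(A @ x_star, b))  # the minimizer solves A x = b
```

For a quadratic, the BB1 value sᵀs/sᵀy is a Rayleigh-quotient-like approximation to the inverse of a Hessian eigenvalue, which is what gives the method its fast, non-monotone convergence compared to exact line search steepest descent.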
Keywords/Search Tags:Unconstrained Optimization, BB steplength, Conjugate gradient method, Subspace method