
Some Researches About The Supermemory Gradient Method For Unconstrained Optimization Problem

Posted on: 2014-08-25
Degree: Master
Type: Thesis
Country: China
Candidate: W Ma
Full Text: PDF
GTID: 2250330401474276
Subject: Applied Mathematics
Abstract/Summary:
Based on the fixed step-length and non-monotone techniques, we present two supermemory gradient methods for solving unconstrained optimization problems, denoted Algorithm A and Algorithm B. In Algorithm A, we combine the fixed step-length technique with a backtracking line-search technique, thereby reducing the number of iterations and function evaluations. In Algorithm B, we improve the traditional non-monotone technique to make it more suitable for solving large-scale optimization problems. Under certain conditions, both algorithms are globally convergent. Numerical results show that they are effective and feasible.
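To illustrate the two ingredients the abstract names, the sketch below combines a generic supermemory gradient direction (the negative gradient plus a bounded combination of the last few search directions) with a nonmonotone backtracking line search that accepts a step against the maximum of recent function values. This is a minimal textbook-style sketch, not the thesis's actual Algorithms A or B; the memory size `m`, bound parameter `rho`, and window `M` are illustrative choices.

```python
import numpy as np

def supermemory_gradient(f, grad, x0, m=3, rho=0.4, sigma=1e-4,
                         M=5, tol=1e-6, max_iter=1000):
    """Generic supermemory gradient method with a nonmonotone
    (max-type) Armijo backtracking line search.  Illustrative only."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    dirs = []            # memory of the last m search directions
    f_hist = [f(x)]      # recent f-values for the nonmonotone test
    for _ in range(max_iter):
        if np.linalg.norm(g) <= tol:
            break
        # Supermemory direction: -g plus a bounded combination of the
        # stored directions.  The bound |beta_i| <= rho*||g||/(p*||d_i||)
        # (p = number of stored directions) guarantees descent:
        # g^T d <= -(1 - rho) * ||g||^2 for rho < 1.
        d = -g.copy()
        for d_old in dirs:
            beta = rho * np.linalg.norm(g) / (len(dirs) * np.linalg.norm(d_old))
            d += beta * d_old
        # Nonmonotone backtracking: compare the trial point against the
        # maximum of the last M function values instead of f(x_k) alone,
        # which accepts larger steps than a monotone Armijo rule.
        f_ref = max(f_hist[-M:])
        gtd = g @ d
        alpha = 1.0
        while f(x + alpha * d) > f_ref + sigma * alpha * gtd:
            alpha *= 0.5
        x = x + alpha * d
        g = grad(x)
        dirs = (dirs + [d])[-m:]
        f_hist.append(f(x))
    return x
```

For a smooth objective the loop terminates because `d` is always a descent direction, and the nonmonotone reference value `f_ref` only loosens the standard Armijo condition, never tightens it.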
Keywords/Search Tags: unconstrained optimization, supermemory gradient method, fixed step-length technique, non-monotone technique, convergence, numerical experiments