Methods for solving unconstrained optimization problems form an important branch of optimization theory. Among the many unconstrained optimization methods, the memory gradient method and the super-memory gradient method generate the next iterate using information from previous iterations. Because of their strong convergence properties and low computational cost, these algorithms are widely applied.

The main results of this dissertation are as follows. In Chapter 2, a modified memory gradient method without line search is given, which extends an existing algorithm in the literature. In Chapter 3, a new super-memory gradient method for unconstrained optimization is proposed; since this algorithm solves only a system of linear equations instead of a quadratic subproblem with a trust region bound, its computational efficiency is improved. In Chapter 4, an implementation is given for the trust region gradient method with a memory model; numerical results show that this scheme is feasible.
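To illustrate the basic idea behind memory gradient methods, the following is a minimal sketch of a generic memory gradient iteration, where the search direction combines the current gradient with the previous direction, d_k = -g_k + beta * d_{k-1}. The fixed step size and fixed memory parameter here are placeholder assumptions for illustration only, not the specific rules proposed in the dissertation.

```python
import numpy as np

def memory_gradient(f_grad, x0, beta=0.3, alpha=0.05, tol=1e-8, max_iter=10000):
    """Generic memory gradient iteration: d_k = -g_k + beta * d_{k-1}.

    A minimal illustration; fixed alpha and beta are placeholder
    assumptions, not the dissertation's parameter choices.
    """
    x = np.asarray(x0, dtype=float)
    d = np.zeros_like(x)          # d_{-1} = 0, so the first step is steepest descent
    for _ in range(max_iter):
        g = f_grad(x)
        if np.linalg.norm(g) < tol:
            break                 # gradient small enough: stop
        d = -g + beta * d         # mix current gradient with previous direction
        x = x + alpha * d         # fixed step size (no line search)
    return x

# Example: minimize the quadratic f(x) = 0.5 * x^T A x - b^T x,
# whose gradient is A x - b and whose minimizer solves A x = b.
A = np.array([[3.0, 0.5], [0.5, 2.0]])
b = np.array([1.0, 1.0])
x_star = memory_gradient(lambda x: A @ x - b, np.zeros(2))
```

A super-memory gradient method generalizes this by retaining several previous directions rather than only the most recent one.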