Seeking fast theoretical convergence and effective algorithms for unconstrained optimization is a topic of great interest to optimization specialists and engineers. Paper [28] presents a class of non-quasi-Newton algorithms for unconstrained programming problems based on a modified non-quasi-Newton equation. In this paper, we give a class of superlinearly convergent algorithms for unconstrained nonlinear programming problems by combining the non-quasi-Newton methods of [28] with inexact line searches. In Chapter 1, we first review the development of optimization and some widely used optimality conditions for identifying an optimal solution, and survey several common derivative-based descent methods for unconstrained programming. In Chapter 2, we study whether the non-quasi-Newton family, when combined with an inexact line search, converges globally on unconstrained optimization problems. We propose an update and prove that the method with either a Wolfe-type or an Armijo-type line search converges globally if the function to be minimized has a Lipschitz continuous gradient. In Chapter 3, non-quasi-Newton methods for unconstrained optimization are investigated. A non-monotone line search procedure is introduced and combined with the non-quasi-Newton family. Under the uniform convexity assumption on the objective function, the global convergence of the non-quasi-Newton family is proved.
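To illustrate the kind of inexact line search referred to above, the following is a minimal sketch of an Armijo-type backtracking line search applied with steepest-descent directions. It is not the thesis's actual algorithm (which uses non-quasi-Newton directions and the update of [28]); the function names and the constants `rho` and `c` are illustrative assumptions.

```python
import numpy as np

def armijo_backtracking(f, grad_f, x, d, alpha0=1.0, rho=0.5, c=1e-4):
    """Shrink the step length alpha until the Armijo (sufficient decrease)
    condition holds: f(x + alpha*d) <= f(x) + c * alpha * grad_f(x)^T d.
    Assumes d is a descent direction, i.e. grad_f(x)^T d < 0."""
    alpha = alpha0
    fx = f(x)
    slope = grad_f(x) @ d  # directional derivative along d
    while f(x + alpha * d) > fx + c * alpha * slope:
        alpha *= rho
    return alpha

# Toy example: minimize the strongly convex quadratic f(x) = 0.5 ||x||^2
# using steepest-descent directions with the Armijo step.
def f(x):
    return 0.5 * float(x @ x)

def grad_f(x):
    return x

x = np.array([2.0, -1.0])
for _ in range(50):
    d = -grad_f(x)  # steepest-descent direction
    alpha = armijo_backtracking(f, grad_f, x, d)
    x = x + alpha * d
```

In a non-quasi-Newton method, the direction `d` would instead be `-B_k^{-1} grad_f(x)` for a matrix `B_k` produced by the (modified) non-quasi-Newton update, but the acceptance test on the step length is the same.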