Quasi-Newton methods are regarded as among the most efficient methods for solving unconstrained optimization problems, and their main ideas can also be applied to constrained optimization. As is well known, quasi-Newton equations form the basis of quasi-Newton methods; according to when they were proposed, they can be classified into the original quasi-Newton equation and the newer ones. The original quasi-Newton equation exploits only the gradient difference of the two most recent iterates and ignores the available function-value information. To obtain more accurate quasi-Newton equations, many researchers have modified the original equation or proposed new quasi-Newton equations that exploit both the gradient difference and the function values.

First, this paper examines the recent new quasi-Newton equations that possess better approximation properties and writes some of them in a unified form. This class includes the quasi-Newton equation proposed by Jianzhong Zhang, the one proposed by Yunhai Xiao, and the original quasi-Newton equation. Based on Donghui Li's modification idea, we then modify this class to obtain a new class of quasi-Newton equations.

Second, we construct the BFGS quasi-Newton method based on this class of modified quasi-Newton equations and prove its global convergence under the convexity assumption. (For the new quasi-Newton equations, global convergence has usually been established only under the uniform convexity assumption; this body of work includes papers [5], [12], and [56], among others.)

Finally, this paper establishes the local superlinear convergence of the BFGS and DFP methods based on this class of modified quasi-Newton equations. We also examine suitable choices of {r_k}, and numerical experiments demonstrate the effectiveness of the methods.
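As background for the equations discussed above, the following is a minimal numerical sketch (not the modified equations studied in this paper) of one textbook BFGS update, whose result satisfies the original quasi-Newton (secant) equation B_{k+1} s_k = y_k; the quadratic objective and all variable names here are illustrative assumptions.

```python
import numpy as np

def bfgs_update(B, s, y):
    """One BFGS update of the Hessian approximation B.

    The updated matrix satisfies the original quasi-Newton equation
    B_new @ s = y, where s = x_{k+1} - x_k is the step and
    y = g_{k+1} - g_k is the gradient difference. Note that this
    equation uses only gradient information, not function values.
    """
    Bs = B @ s
    # Standard rank-two BFGS formula:
    # B_new = B - (B s s^T B) / (s^T B s) + (y y^T) / (y^T s)
    return B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / (y @ s)

# Illustrative quadratic f(x) = 0.5 x^T A x, so that y = A s exactly
# and A is the true Hessian.
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
B = np.eye(2)                # initial Hessian approximation
s = np.array([1.0, -0.5])    # step x_{k+1} - x_k
y = A @ s                    # gradient difference g_{k+1} - g_k
B_new = bfgs_update(B, s, y)
print(np.allclose(B_new @ s, y))  # secant equation holds: True
```

The modified equations surveyed in the paper replace y_k above with a corrected vector that also incorporates function values f_k and f_{k+1}, which is what yields the better approximation properties discussed in the text.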