Quasi-Newton methods are among the most effective methods for solving unconstrained optimization problems; their basic idea is to estimate the second-order derivatives using only first-order derivatives. The main differences among the various quasi-Newton methods lie in how the estimate of the second-order derivatives is updated from one iterate to the next, and in the type and accuracy of the line search. Each method produces a sequence of matrices B_{k+1} approximating the second-order derivatives of the objective function, of the form B_{k+1} = B_{k} + A_{k}, where B_{k} is the approximation matrix at the previous iteration and A_{k} is a correction matrix. In this paper, the author first proposes the condition that A_{k} should satisfy, namely s_{k}^{T}A_{k}s_{k} = θ_{k}, where θ_{k} = 2(f_{k} - f_{k+1}) + s_{k}^{T}(g_{k+1} + g_{k}). Three formulae satisfying this condition are then given: (1) (?); (2) (?); (3) (?), where u_{k}, v_{k} ∈ R^{n} satisfy s_{k}^{T}u_{k} ≠ 0 and s_{k}^{T}v_{k} ≠ 0. From these, six reasonable choices are obtained: (1) (?). Based on these choices, the three corresponding algorithms are proved to possess the global convergence property. Finally, numerical experiments on eighteen popular test functions have been conducted, and the results show that the proposed algorithms are very encouraging.
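As a hypothetical illustration (not one of the paper's three formulae, which are not legible here), any rank-one correction of the form A_{k} = θ_{k} u_{k} u_{k}^{T} / (s_{k}^{T} u_{k})^{2} with s_{k}^{T} u_{k} ≠ 0 satisfies the proposed condition s_{k}^{T} A_{k} s_{k} = θ_{k}, since s_{k}^{T} u_{k} u_{k}^{T} s_{k} = (s_{k}^{T} u_{k})^{2}. A minimal NumPy sketch, using the assumed test function f(x) = x_1^4 + x_2^2 and the choice u_{k} = s_{k}:

```python
import numpy as np

def theta(f_k, f_k1, s, g_k, g_k1):
    # theta_k = 2(f_k - f_{k+1}) + s_k^T (g_{k+1} + g_k)
    return 2.0 * (f_k - f_k1) + s @ (g_k1 + g_k)

def rank_one_correction(s, u, th):
    # A_k = theta_k * u_k u_k^T / (s_k^T u_k)^2, valid when s_k^T u_k != 0
    su = s @ u
    assert abs(su) > 1e-12, "requires s_k^T u_k != 0"
    return th * np.outer(u, u) / su**2

# demo on the (assumed) non-quadratic objective f(x) = x1^4 + x2^2
f = lambda x: x[0]**4 + x[1]**2
g = lambda x: np.array([4.0 * x[0]**3, 2.0 * x[1]])  # gradient of f

x_k = np.array([1.0, 2.0])
x_k1 = np.array([0.5, 1.0])
s = x_k1 - x_k  # step s_k = x_{k+1} - x_k

th = theta(f(x_k), f(x_k1), s, g(x_k), g(x_k1))
A = rank_one_correction(s, s, th)  # take u_k = s_k

# the curvature condition s_k^T A_k s_k = theta_k holds by construction
print(s @ A @ s, th)
```

Note that θ_{k} vanishes whenever f is quadratic (since then f_{k} - f_{k+1} = -g_{k}^{T}s_{k} - s_{k}^{T}(g_{k+1} - g_{k})/2), so the condition effectively measures the departure of f from quadratic behavior along s_{k}.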