
On Sparse Quasi-Newton Method For Unconstrained Optimization Problem

Posted on: 2004-02-17
Degree: Master
Type: Thesis
Country: China
Candidate: G Sun
Full Text: PDF
GTID: 2120360092995270
Subject: Operational Research and Cybernetics

Abstract/Summary:
There are four parts in this thesis. Firstly, we briefly introduce the scope, value and development of sparse quasi-Newton methods for unconstrained optimization problems. Secondly, taking the update matrix H_k to be diagonal, we present a diagonal sparse quasi-Newton method for the unconstrained optimization problem.

2. Diagonal Sparse Quasi-Newton Method with Armijo Line Search

2.1 Introduction

In 1988 Barzilai and Borwein gave an iterative formula in which the step matrix is H_k = λ_k I, where λ_k minimizes ||Δx - λΔg||_2 with Δx = x_k - x_{k-1} and Δg = g_k - g_{k-1}. λ_k is given by

    λ_k = (Δx, Δg) / (Δg, Δg),

where (a, b) denotes the usual inner product of the vectors a and b. The authors carried out a convergence analysis of their method only in the two-dimensional case. Based on their idea and on earlier algorithms, we propose a sparse quasi-Newton method: let H_k = diag(h_k^1, h_k^2, ..., h_k^n) be a sparse (diagonal) matrix that satisfies the quasi-Newton equation approximately, i.e. H_k minimizes ||H y_{k-1} - s_{k-1}|| over diagonal matrices, where y_{k-1}^i and s_{k-1}^i denote the i-th components of y_{k-1} and s_{k-1}. Solving this componentwise yields the diagonal update formula, denoted (*) below.

2.2 Algorithm

Assumption (H2.1): f ∈ C^1 and the level set L(x_0) = {x ∈ R^n | f(x) ≤ f(x_0)} is bounded.

Algorithm 2.1
Step 0. Choose 0 < σ < 1 and x_0, x_1 ∈ R^n; set k := 1.
Step 1. If ||g_k|| = 0, stop; otherwise go to Step 2.
Step 2. Compute H_k from (*) and set d_k = -H_k g_k.
Step 3. Set x_{k+1} = x_k + α_k d_k, where α_k is the largest number in the set {1, 1/2, 1/2^2, ...} such that f(x_k) - f(x_k + α_k d_k) ≥ -σ α_k g_k^T d_k.
Step 4. Set k := k + 1 and go to Step 1.
At each iteration let L_k = c_1 ||g_k||^2 and U_k = L_k + c_2.

2.3 Convergence

Lemma 2.1. Assume (H2.1) holds. If the sequence {x_k} generated by the algorithm is infinite, then cos⟨-g_k, d_k⟩ is bounded below by a positive constant, where ⟨-g_k, d_k⟩ denotes the angle between the vectors -g_k and d_k.

Theorem 2.1.
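Algorithm 2.1 can be sketched in code. The following Python sketch is one plausible reading of the abstract: the diagonal quasi-Newton equation H_k y_{k-1} = s_{k-1} is solved componentwise as h_k^i = s_{k-1}^i / y_{k-1}^i, the entries are kept inside [L_k, U_k] with L_k = c_1 ||g_k||^2 and U_k = L_k + c_2, and the step length comes from the Armijo backtracking search of Step 3. Treating [L_k, U_k] as a projection interval for the diagonal entries, and the particular constants c_1, c_2, σ, are assumptions of this sketch, not details confirmed by the thesis.

```python
import numpy as np

def diag_sparse_qn(f, grad, x0, sigma=0.3, c1=1e-6, c2=1e6,
                   tol=1e-8, max_iter=500):
    """Sketch of Algorithm 2.1: diagonal sparse quasi-Newton method
    with Armijo backtracking line search."""
    x_prev = np.asarray(x0, dtype=float)
    g_prev = grad(x_prev)
    # The algorithm needs two starting points x_0, x_1; here x_1 is
    # taken as a plain gradient step (an assumption of this sketch).
    x = x_prev - g_prev
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:        # Step 1: stopping test
            break
        s, y = x - x_prev, g - g_prev
        # Step 2: diagonal H_k from the quasi-Newton equation,
        # h^i = s^i / y^i, projected into the safeguard [L_k, U_k]
        # (clipping also keeps every h^i > 0, so d_k is a descent
        # direction).
        L = c1 * np.dot(g, g)              # L_k = c_1 ||g_k||^2
        U = L + c2                         # U_k = L_k + c_2
        with np.errstate(divide="ignore", invalid="ignore"):
            h = np.where(np.abs(y) > 1e-12, s / y, 1.0)
        h = np.clip(h, L, U)
        d = -h * g                         # d_k = -H_k g_k (H_k diagonal)
        # Step 3: Armijo search over alpha in {1, 1/2, 1/4, ...}:
        # accept when f(x_k) - f(x_k + a d_k) >= -sigma a g_k^T d_k.
        fx = f(x)
        alpha = 1.0
        while f(x + alpha * d) > fx + sigma * alpha * np.dot(g, d):
            alpha *= 0.5
            if alpha < 1e-16:
                break
        x_prev, g_prev = x, g
        x = x + alpha * d                  # Step 4: next iterate
    return x
```

On a strictly convex quadratic such as f(x) = (x_1^2 + 10 x_2^2)/2, the diagonal update recovers the inverse Hessian exactly after one correction pair, so the sketch converges in a handful of iterations.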
If (H2.1) holds, then either the algorithm stops at some k, or it generates an infinite sequence {x_k} every limit point of which is a stationary point.

2.4 Linear Convergence

Assumption (H2.2): Let f(x): R^n -> R be twice continuously differentiable in some neighborhood of x*, the minimal point of f(x), and suppose there exist ε > 0 and M > m > 0 such that whenever ||x - x*|| < ε,

    m ||z||^2 ≤ z^T ∇²f(x) z ≤ M ||z||^2   for all z ∈ R^n.

Lemma 2.2. If (H2.2) is satisfied, then the corresponding two-sided estimates hold near x*.

Lemma 2.3. Assume (H2.1) and (H2.2) are satisfied; then inf_k α_k = η_0 > 0.

Theorem 2.2. Suppose (H2.1) and (H2.2) hold. If the algorithm generates a sequence {x_k} that converges to the minimal point x*, and 0 < L ≤ L_k ≤ U_k ≤ U for some constants L, U, then {x_k} converges linearly to x*.

2.5 Superlinear Convergence

Assumption (H2.3): The Hessian matrix ∇²f(x) is Lipschitz continuous in some neighborhood of the minimal point x*; that is, there exists γ > 0 such that

    ||∇²f(x) - ∇²f(y)|| ≤ γ ||x - y||   for all x, y ∈ N(x*, ε).

Lemma 2.4. Let F: R^n -> R^m be continuously differentiable in the open convex set D. Then for all x, u, v ∈ D,

    ||F(u) - F(v) - F'(x)(u - v)|| ≤ [ sup_{0≤t≤1} ||F'(v + t(u - v)) - F'(x)|| ] ||u - v||.

If, in addition, F' is Lipschitz continuous in D with constant γ, then

    ||F(u) - F(v) - F'(x)(u - v)|| ≤ γ σ(u, v) ||u - v||,

where σ(u, v) = max{||u - x||, ||v - x||}.

Lemma 2.5. Suppose F and F' satisfy the conditions of Lemma 2.4 and that [F'(x*)]^{-1} exists. Then there exist ε > 0 and β ≥ α > 0 such that

    α ||u - v|| ≤ ||F(u) - F(v)|| ≤ β ||u - v||

for all u, v ∈ D with max{||u - x*||, ||v - x*||} ≤ ε. In our application F = ∇f, x* is the minimal point of f(x), and ∇f is Lipschitz continuous in the set D.

Theorem 2.3. Under assumptions (H2.1), (H2.2) and (H2.3), if the infinite sequence {x_k} generated by the algorithm converges to the minimal point x*, then {x_k} converges superlinearly to x* if and only if the characterizing condition on the matrices H_k holds.

2.6 Numerical Experiment

The following numerical experiment shows that the new algorithm is effective (Table 2.1).

3. Further Convergence Properties

3.1 Algorithm

Further convergence properties of the diagonal sparse quasi-Newton method are studied.
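The characterizing condition of Theorem 2.3 is elided in this abstract. For quasi-Newton methods of this kind, the standard necessary and sufficient condition for superlinear convergence is the Dennis–Moré condition; it is stated here for reference under the assumption, not confirmed by the abstract, that it is the intended condition:

```latex
\lim_{k \to \infty}
\frac{\bigl\| \bigl( H_k^{-1} - \nabla^2 f(x^*) \bigr) s_k \bigr\|}{\| s_k \|} = 0,
\qquad s_k = x_{k+1} - x_k .
```

Informally, the diagonal approximation H_k^{-1} need not converge to the full Hessian; it only has to act like the Hessian along the directions the iterates actually move in.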
We know from the preceding analysis that the convergence rate is related to L_k and U_k, t...
Keywords/Search Tags: Unconstrained