
Preconditioning Method For Saddlepoint Problems

Posted on: 2008-04-24    Degree: Master    Type: Thesis
Country: China    Candidate: W Y Cai    Full Text: PDF
GTID: 2120360212996104    Subject: Computational Mathematics
Abstract/Summary:
The saddle point linear system

(1)    \begin{pmatrix} M & B \\ B^T & 0 \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} f \\ g \end{pmatrix}

is a symmetric and indefinite linear system of equations, where M ∈ R^{n×n} is symmetric and positive definite, B ∈ R^{n×m} has full column rank, x, f ∈ R^n, y, g ∈ R^m and n ≥ m. It is well known that the system (1) then has a unique solution x ∈ R^n, y ∈ R^m.

The system (1) arises in many scientific and engineering computations, for example from the discretization of partial differential equations, from optimization problems and from least squares problems. Because of its wide practical applications, it is of great interest to develop efficient iterative methods for such problems.

Many numerical methods are available for solving the saddle point system, such as the Uzawa method of [4] and the minimum residual method of [2]. However, the convergence of both methods depends on the condition number of A, which is related to the discretization parameter h. In order to speed up the convergence we therefore consider a preconditioned version of the method.

First, we establish a simple result relating Λ(A) to properties of the matrices M and B; these properties will then be related to the convergence rate of the minimum residual method.

Lemma 2.1. Let μ_1 ≥ μ_2 ≥ … ≥ μ_n > 0 be the eigenvalues of M, let σ_1 ≥ σ_2 ≥ … ≥ σ_m > 0 be the singular values of B, and denote by Λ(A) the spectrum of A. Then Λ(A) ⊂ I ≡ I^- ∪ I^+, where

I^- = \left[\tfrac12\left(\mu_n - \sqrt{\mu_n^2 + 4\sigma_1^2}\right),\ \tfrac12\left(\mu_1 - \sqrt{\mu_1^2 + 4\sigma_m^2}\right)\right],
I^+ = \left[\mu_n,\ \tfrac12\left(\mu_1 + \sqrt{\mu_1^2 + 4\sigma_1^2}\right)\right].

The approximation x_k ∈ V_k of the exact solution x^* produced by the minimum residual method, where V_k is the k-th Krylov space generated by A and the initial residual r_0, is determined by

(2)    \|b - A x_k\| = \min_{x \in V_k} \|b - A x\|.

From (2) we can derive a convergence estimate for the method. Recall that the residual r_k = b - A x_k satisfies

(3)    \|r_k\| = \min_{p \in P_k} \|p(A) r_0\|,

where P_k = { p | p is a polynomial of degree less than or equal to k and p(0) = 1 }. Using spectral theory we easily obtain from (3) that

(4)    \|r_k\| \le \delta_k(I)\,\|r_0\|,    where    \delta_k(I) = \min_{p \in P_k} \max_{\lambda \in I} |p(\lambda)|.

To estimate δ_k(I), we note that (i) δ_k(αI) = δ_k(I) for every α ≠ 0, and (ii) if Ĩ ⊆ I, then obviously δ_k(Ĩ) ≤ δ_k(I). The estimate (4) therefore clearly indicates that the minimum residual method converges faster when we shrink the intervals I ≡ I^- ∪ I^+.

By (i) and (ii) we may scale I so that the lower limit of I^+ is unity, and we can therefore assume that I^- and I^+ are given by

(5)    I^- = \left[\tfrac12\left(1 - \sqrt{1 + 4\kappa(B)^2\rho^2}\right),\ \tfrac12\left(\kappa(M) - \sqrt{\kappa(M)^2 + 4\rho^2}\right)\right],
(6)    I^+ = \left[1,\ \tfrac12\left(\kappa(M) + \sqrt{\kappa(M)^2 + 4\kappa(B)^2\rho^2}\right)\right].

Here κ(M) = μ_1/μ_n and κ(B) = σ_1/σ_m are the condition numbers of M and B, and ρ ≡ ρ(B, M) = σ_m/μ_n denotes the relative scaling between the matrices M and B.

The expressions given by (5)–(6) indicate that the convergence rate of the minimum residual method can be estimated by the three parameters κ(M), κ(B) and ρ(B, M). If the scaling factor ρ(B, M) is kept fixed, it is easy to see that a reduction in one of the condition numbers κ(M) and κ(B) leads to a proper shrinking of the intervals I^- ∪ I^+. We therefore conclude that if we can improve the conditioning of the matrices M and B, while the relative scaling between them is kept roughly fixed, then the minimum residual method applied to the system (1) will converge quickly.

Let L ∈ R^{n×n} and R ∈ R^{m×m} be nonsingular matrices, and let S^{1/2} = diag(L, R^T). Then

(S^{1/2})^{-1} A (S^{1/2})^{-T} = \begin{pmatrix} L^{-1} M L^{-T} & L^{-1} B R^{-1} \\ (L^{-1} B R^{-1})^T & 0 \end{pmatrix}.

Hence, the system (1) is equivalent to the system

(7)    \begin{pmatrix} L^{-1} M L^{-T} & L^{-1} B R^{-1} \\ (L^{-1} B R^{-1})^T & 0 \end{pmatrix}\begin{pmatrix} v \\ w \end{pmatrix} = \begin{pmatrix} L^{-1} f \\ R^{-T} g \end{pmatrix},

where v = L^T x and w = R y. The system (7) has the same structure as (1). In fact, (7) is equivalent to the constrained minimization problem

\min_v \tfrac12\langle L^{-1} M L^{-T} v, v\rangle - \langle L^{-1} f, v\rangle    subject to    (L^{-1} B R^{-1})^T v = R^{-T} g.

If L and R are such that

\kappa(L^{-1} M L^{-T}) \ll \kappa(M)    and    \kappa(L^{-1} B R^{-1}) \ll \kappa(B),

while

\rho(L^{-1} B R^{-1},\, L^{-1} M L^{-T}) \approx \rho(B, M),

then the minimum residual method applied to (7) will usually converge faster than if the method is applied to the original system (1). The choice of the preconditioning matrices L and R is, of course, problem dependent. The problem of designing an effective preconditioner of the form discussed above for the system (1) is, in some sense, equivalent to the problem of designing effective preconditioners for the symmetric and positive definite matrices M and B^T (L L^T)^{-1} B. Furthermore, if the original system is properly scaled, the scaling factor ρ should not be significantly changed by the preconditioning of the system.
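To illustrate the preconditioning idea numerically, the sketch below (a minimal illustration, not part of the thesis) assembles a small synthetic saddle point system and compares MINRES iteration counts with and without a block-diagonal symmetric positive definite preconditioner. The one-dimensional Laplacian used for M, the random B, and the use of exact Cholesky factors of M and B^T M^{-1} B in place of the incomplete Cholesky factors discussed above are all illustrative assumptions.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve
from scipy.sparse.linalg import LinearOperator, minres

rng = np.random.default_rng(0)
n, m = 60, 20

# Illustrative saddle point system (1): M is an ill-conditioned SPD block
# (a scaled 1D Laplacian, mimicking the h-dependence of a discretized PDE),
# B is a random n-by-m block with full column rank.
M = (n + 1) ** 2 * (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1))
B = rng.standard_normal((n, m))
A = np.block([[M, B], [B.T, np.zeros((m, m))]])
b = rng.standard_normal(n + m)

# Block-diagonal SPD preconditioner diag(M, B^T M^{-1} B)^{-1}, applied through
# Cholesky factors (exact here, standing in for the incomplete factors L and R).
Mfac = cho_factor(M)
S = B.T @ cho_solve(Mfac, B)            # B^T M^{-1} B
Sfac = cho_factor(S)

def apply_prec(r):
    # Solve the two diagonal blocks separately for a residual vector r.
    return np.concatenate([cho_solve(Mfac, r[:n]), cho_solve(Sfac, r[n:])])

P = LinearOperator((n + m, n + m), matvec=apply_prec)

def minres_iters(**kwargs):
    count = []
    minres(A, b, maxiter=5000, callback=lambda xk: count.append(1), **kwargs)
    return len(count)

print("MINRES iterations without preconditioner:", minres_iters())
print("MINRES iterations with block-diagonal preconditioner:", minres_iters(M=P))
```

With the exact block factors the preconditioned matrix has only a few distinct eigenvalues, so MINRES converges in a handful of steps; incomplete factors trade part of this gain for a much cheaper setup.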
We apply the incomplete Cholesky factorization to the discrete systems arising from mixed finite element methods for some second-order elliptic problems and to discrete Stokes problems. Finally, the application of the preconditioned minimum residual method to two model problems is considered. Our computations show that the preconditioned minimum residual method reduces the number of iterations significantly compared to the unpreconditioned case (see the tables in Chapter 5 for details). We have also verified that the condition numbers after preconditioning are clearly improved. In conclusion, the incomplete Cholesky factorization is an effective preconditioning method for our model problems, and the preconditioned minimum residual method is well suited to saddle point problems.
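As an illustration of the incomplete Cholesky idea (a textbook zero-fill IC(0) sketch, not the thesis's implementation), the following Python function computes a lower-triangular factor restricted to the sparsity pattern of the input matrix, and a small example shows how that factor improves the conditioning of a 2D Laplacian block. The function name ichol0 and the test matrix are illustrative assumptions.

```python
import numpy as np

def ichol0(A):
    """Zero-fill incomplete Cholesky IC(0): return a lower-triangular L with
    L @ L.T ~= A, allowing nonzeros in L only where tril(A) is nonzero.
    Dense-index sketch; a practical code would use sparse storage."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    pattern = np.tril(A) != 0          # positions where L may keep an entry
    L = np.tril(A).copy()
    for k in range(n):
        L[k, k] = np.sqrt(L[k, k])     # can break down for general SPD A; safe for M-matrices
        for i in range(k + 1, n):
            if pattern[i, k]:
                L[i, k] /= L[k, k]
        for j in range(k + 1, n):
            for i in range(j, n):
                if pattern[i, j]:      # update only inside the pattern: fill is discarded
                    L[i, j] -= L[i, k] * L[j, k]
    return L

# Example: 2D Laplacian on a small grid; IC(0) typically shrinks the condition number.
T = 2 * np.eye(8) - np.eye(8, k=1) - np.eye(8, k=-1)
Lap2d = np.kron(np.eye(8), T) + np.kron(T, np.eye(8))
L = ichol0(Lap2d)
Li = np.linalg.inv(L)
print("cond before:", np.linalg.cond(Lap2d))
print("cond after: ", np.linalg.cond(Li @ Lap2d @ Li.T))
```

A factor of this kind is one natural candidate for the matrix L (and, analogously, R) in the transformed system (7).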
Keywords/Search Tags: saddlepoint problems, mixed finite elements, minimum residual methods, preconditioning