
Krylov Subspace Methods For Large Matrix Eigenvalue And Linear System Problems

Posted on: 2009-10-02    Degree: Doctor    Type: Dissertation
Country: China    Candidate: Q Niu    Full Text: PDF
GTID: 1100360272988824    Subject: Computational Mathematics
Abstract/Summary:
Large scale eigenvalue problems and large sparse linear systems of equations are at the heart of many large scale computations in science, engineering, industrial simulation, finance, and so on. Solving a real problem end to end typically spends considerable CPU time on the linear equations or on computing a few meaningful eigenpairs, so the simulation process can be accelerated considerably if the associated matrix computation problems are solved efficiently. Over the past twenty years these large scale matrix computation problems have been among the most active areas of computational mathematics, widely investigated by experts all over the world. Many successful algorithms have been proposed; in particular, Krylov subspace algorithms have made considerable progress. Nevertheless, many problems remain unsolved, and many algorithms still need to be analyzed and improved. In view of the importance and long-range significance of these problems, we endeavor to develop and analyze Krylov subspace algorithms for solving large scale eigenvalue problems and large sparse linear systems of equations. This thesis comprises eight chapters.

In Chapter 1, we first give a brief review of the background of eigenvalue problems and of projection type solvers for them, and then briefly recall the development of iterative methods and preconditioning techniques for large sparse linear systems of equations.

Chapters 2 and 3 deal with large scale eigenvalue problems. In Chapter 2, we propose an Arnoldi type algorithm for computing a few interior eigenvalues of large matrices. A convergence analysis is carried out, together with comparisons against the harmonic Arnoldi algorithm. The results show that the algorithm is efficient, especially when the subspace dimension is small; as the subspace dimension is enlarged, both methods improve quickly and their convergence behaviors become close to each other.

In Chapter 3, we present a class of deflated block Arnoldi type methods for computing partial eigenpairs of large matrices. The Rayleigh-Ritz procedure and its refined variants are analyzed. At the beginning of each restart, the computed approximate eigenvectors are used to form the next initial block vector, and through an inexact deflation procedure in the orthogonalization process the size of the block is automatically reduced by one whenever an eigenpair converges. We show that this procedure not only retains the information of the approximate eigenvectors that have already converged, but also makes the newly generated subspace more favorable for the unconverged eigenpairs. Theoretical analysis shows that the methods are robust and inherit the advantages of regular block Krylov subspace methods for multiple or closely clustered eigenvalues. Qualitative analysis shows that the methods also gradually acquire the merits of restarted Arnoldi methods augmented with approximate eigenvectors. Numerical experiments show that the new methods mitigate the irregular convergence behavior of block Krylov subspace methods and exhibit rather smooth convergence.
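For reference, the sketch below illustrates the standard harmonic Rayleigh-Ritz extraction that Chapter 2's algorithm is compared against. It is a minimal Python illustration under stated assumptions, not the thesis's method: the test matrix, the target shift sigma, and all dimensions are arbitrary choices made for the example.

```python
import numpy as np
from scipy.linalg import eig

def arnoldi(A, b, m):
    """Arnoldi process: orthonormal basis V of the Krylov subspace K_m(A, b)."""
    n = len(b)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):          # modified Gram-Schmidt orthogonalization
            H[i, j] = V[:, i] @ w
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
    return V[:, :m]

def harmonic_ritz(A, V, sigma):
    """Harmonic Ritz pairs of A with respect to range(V) and target sigma.

    The pairs (theta, V y) satisfy the Petrov-Galerkin condition that
    (A V y - theta V y) is orthogonal to (A - sigma I) V, which reduces to
    the small generalized eigenproblem
        W^T W y = (theta - sigma) W^T V y,   with W = (A - sigma I) V.
    """
    W = A @ V - sigma * V
    mu, Y = eig(W.T @ W, W.T @ V)
    return sigma + mu, V @ Y

rng = np.random.default_rng(0)
n, m, sigma = 400, 40, 0.5              # interior target sigma = 0.5
A = np.diag(np.linspace(-1.0, 1.0, n)) + 1e-3 * rng.standard_normal((n, n))
V = arnoldi(A, rng.standard_normal(n), m)
thetas, U = harmonic_ritz(A, V, sigma)
k = np.argmin(np.abs(thetas - sigma))   # harmonic Ritz value closest to the target
u = U[:, k] / np.linalg.norm(U[:, k])
print("theta =", thetas[k], "residual norm =", np.linalg.norm(A @ u - thetas[k] * u))
```

Projecting against (A - sigma I)V rather than V itself is what biases the extraction toward eigenvalues near the interior target, where ordinary Rayleigh-Ritz tends to deliver poor approximations.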
Chapters 4 through 7 investigate iterative solvers for large sparse linear systems.

In Chapter 4, motivated by the observation that the skip angles of GMRES-E are usually small, we propose to accelerate the convergence of GMRES-E by also augmenting the search space with error approximations, which results in a combined augmented restarted GMRES method. We demonstrate that the combined method gains the advantages of both approaches: (i) it effectively deflates the eigenvalues of small magnitude that may hamper convergence; (ii) it partially recovers the linear independence of the basis. The numerical results show that the method efficiently accelerates the convergence of GMRES-E and is especially suitable for linear systems with a few relatively small eigenvalues.

In Chapter 5, an accelerating strategy for the GCRO-DR method is proposed. By recycling some of the error approximations generated during previous restart cycles, the new method mitigates the occurrence of small skip angles in GCRO-DR and thereby shortens the phase of slow convergence. If just one error approximation is retained, we show that the acceleration can be carried out in a natural way; if more than one is retained, we analyze an inexact variant that is cheap and effective. Applications of the new method to a single linear system and to sequences of linear systems are discussed, and its efficiency is illustrated by examples from practical applications. In particular, for sequences of linear systems arising from fatigue and fracture analysis of engineering components, we show that the new method reduces the overall cost by nearly 10%.

In Chapter 6, tangential frequency filtering decompositions and combination preconditioning are discussed. We first show that the conventional filtering decomposition can also be carried out so as to satisfy the left filtering property, and we then introduce the two-sided tangential frequency filtering decomposition. By combining the new filtering preconditioner with the classical ILU(0) preconditioner in a multiplicative way, composite preconditioners are produced, and analysis shows that they benefit from each of their components. Concerning the filtering vector, we show that the vector of all ones is a reasonable choice: it is efficient and avoids the preprocessing that other methods need to build the filtering vector. Moreover, with an appropriate initial guess, the preconditioner that uses ones as the left filtering vector keeps the sum of the residual vector equal to zero throughout the iterations, i.e., the material balance error is zero. Numerical tests show that the composite preconditioner is rather robust and efficient for block tridiagonal linear systems arising from the discretization of partial differential equations with strongly varying coefficients.
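As an illustration of the multiplicative combination idea, the following sketch composes two preconditioners inside a Krylov solve. Since a tangential frequency filtering decomposition is not available off the shelf, a simple Jacobi (diagonal) factor stands in for the filtering preconditioner and SciPy's spilu (a threshold ILU) stands in for ILU(0); both stand-ins are assumptions, and the model matrix is a 2-D Laplacian.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import LinearOperator, gmres, spilu

# Model problem: 2-D Laplacian on an n-by-n grid (block tridiagonal, SPD).
n = 64
T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
A = (sp.kron(sp.identity(n), T) + sp.kron(T, sp.identity(n))).tocsc()

M1_diag = A.diagonal()                        # Jacobi factor (filtering stand-in)
M2 = spilu(A, drop_tol=1e-4, fill_factor=2)   # threshold ILU (ILU(0) stand-in)

def combined_solve(r):
    """Multiplicative combination M^{-1} = M1^{-1} + M2^{-1} - M2^{-1} A M1^{-1}:
    apply M1 first, then let M2 act on the residual that M1 leaves behind."""
    z = r / M1_diag
    return z + M2.solve(r - A @ z)

M = LinearOperator(A.shape, matvec=combined_solve)
b = np.ones(A.shape[0])
its = [0]
x, info = gmres(A, b, M=M, rtol=1e-10,        # 'rtol' needs SciPy >= 1.12
                callback=lambda res: its.__setitem__(0, its[0] + 1),
                callback_type='pr_norm')
print("info =", info, " iterations =", its[0])
```

The order of application matters: the second factor only sees the residual left by the first, which is why a composite of this form can benefit from each component, as the chapter claims for filtering combined with ILU(0).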
In Chapter 7, we discuss preconditioning techniques for saddle point problems. By exploiting the structure of the (1,1) block of the saddle point problem together with the structure of constraint preconditioners, we propose a tangential frequency filtering decomposition type preconditioner for saddle point problems. We test the preconditioner on saddle point problems discretized from the Stokes and Oseen equations, and analyze its behavior as the grid is refined and the Reynolds number increases. The numerical results show that, for a fixed inner iteration tolerance and a fixed Reynolds number, the number of outer iterations grows slowly as the grid is refined, while for a fixed inner iteration tolerance and a fixed grid size, the number of outer iterations is only slightly influenced by the Reynolds number; the latter property makes the preconditioner more appealing than an ILU preconditioner of the same type. In the latter part of the chapter, we propose a class of Schilders factorization type constraint preconditioners for regularized saddle point problems. The spectral analysis shows that the preconditioned matrix inherits the properties obtained with the Schilders factorization type constraint preconditioner for standard saddle point problems, so the proposed factorization naturally generalizes the Schilders factorization to generalized saddle point problems. However, because of the nonzero (2,2) block of the regularized saddle point problem, applying the preconditioner involves Schur complement type linear systems, which generally poses difficulties. For some special regularized saddle point problems that often arise in practical applications, an inexact variant of the preconditioner is analyzed, which avoids the Schur complement type computation. Numerical tests are under preparation.

In the last chapter, a modified tangential frequency filtering decomposition (MTFFD) preconditioner is proposed. The optimal order of the modification and the optimal relaxation parameter are determined by Fourier analysis. The Fourier results show that the condition number of the preconditioned matrix is O(h^{-2/3}), and that the spectral distribution of the preconditioned matrix can be predicted from the Fourier results. The performance of MTFFD is compared with that of the tangential frequency filtering decomposition (TFFD) preconditioner on a variety of large sparse matrices from discretizations of PDEs with discontinuous coefficients; the numerical results show that MTFFD is much more efficient than TFFD.
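The O(h^{-2/3}) condition number above is the thesis's Fourier-analysis result. A rate of this kind can also be checked empirically, and the hedged harness below shows one way: compute the condition number on a sequence of grids and fit the growth exponent. The unpreconditioned 1-D Laplacian (known exponent 2) is used only as a placeholder operator, since the MTFFD preconditioner itself is not reproduced here.

```python
import numpy as np

def cond_2norm_spd(A):
    """2-norm condition number of a symmetric positive definite matrix."""
    ev = np.linalg.eigvalsh(A)
    return ev[-1] / ev[0]

hs, conds = [], []
for n in (32, 64, 128, 256):
    h = 1.0 / (n + 1)
    # Dense 1-D Laplacian; small enough that dense eigenvalues are cheap.
    A = (np.diag(np.full(n, 2.0)) + np.diag(np.full(n - 1, -1.0), 1)
         + np.diag(np.full(n - 1, -1.0), -1)) / h**2
    hs.append(h)
    conds.append(cond_2norm_spd(A))

# Fit cond(A_h) ~ C * h^(-alpha) on a log-log scale; the slope gives alpha.
alpha = -np.polyfit(np.log(hs), np.log(conds), 1)[0]
print("observed exponent alpha = %.2f (Laplacian: 2, MTFFD claim: 2/3)" % alpha)
```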
Keywords/Search Tags: Krylov subspace, GMRES, eigenvalue problems, linear system, saddle point problems, preconditioner