
Study Of High-performance Algorithms For Saddle Point Problems And Markov Chain Problems

Posted on: 2013-01-29
Degree: Doctor
Type: Dissertation
Country: China
Candidate: C Wen
Full Text: PDF
GTID: 1110330374986982
Subject: Applied Mathematics

Abstract/Summary:
Large-scale sparse linear algebraic systems arise throughout science and engineering, for example in simulations of communication networks and transport phenomena, economic management and market forecasting, numerical solution of high-order differential equations, fluid mechanics, computational electromagnetics, reservoir modeling, and optimization. With the rapid development of science and technology, the demands on the speed and accuracy of solving such systems keep growing. Research on methods for solving large-scale sparse linear algebraic systems therefore remains one of the key issues of large-scale scientific and engineering computing, with both theoretical significance and practical applications. This doctoral dissertation studies direct methods, matrix splitting methods, Krylov subspace methods, and preconditioning techniques for large-scale sparse linear systems arising from saddle point problems and Markov chain problems. The dissertation consists of four parts organized in six chapters.

Part one contributes preconditioning techniques for the iterative solution of saddle point problems. A modified symmetric successive overrelaxation-like (MSSOR-like) method is first proposed for solving saddle point problems; its convergence conditions are discussed, the optimal domain of the iteration parameter is given, and numerical experiments verify its effectiveness. Next, a modified SSOR preconditioner is established for singular symmetric saddle point problems by changing the values of the (2,2) block matrix. The convergence conditions and spectral properties of the MSSOR-preconditioned matrix are studied comprehensively, and the efficiency and stability of the proposed preconditioner are shown through numerical results.
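To make the preconditioning idea concrete, the sketch below is a generic illustration, not the dissertation's MSSOR-like method itself: the matrix sizes, the regularization parameter `alpha`, and the relaxation parameter `w` are all hypothetical. It builds an SSOR-type preconditioner for a small saddle point system after replacing the zero (2,2) block with `-alpha*I` (a stand-in for the dissertation's modification of that block, needed so the diagonal of the splitting is nonsingular) and uses it inside SciPy's GMRES.

```python
import numpy as np
from scipy.linalg import solve_triangular
from scipy.sparse.linalg import LinearOperator, gmres

rng = np.random.default_rng(0)
n, m = 8, 3
G = rng.standard_normal((n, n))
H = G @ G.T + n * np.eye(n)          # symmetric positive definite (1,1) block
B = rng.standard_normal((m, n))      # constraint block
A = np.block([[H, B.T], [B, np.zeros((m, m))]])
b = rng.standard_normal(n + m)

# Regularize the zero (2,2) block so the splitting's diagonal is nonsingular
# (a stand-in for the dissertation's modification of the (2,2) block).
alpha, w = 1.0, 1.0
Areg = A.copy()
Areg[n:, n:] = -alpha * np.eye(m)

d = np.diag(Areg)                    # nonzero diagonal after regularization
M_lo = np.diag(d) + w * np.tril(Areg, -1)   # D + wL
M_up = np.diag(d) + w * np.triu(Areg, 1)    # D + wU

def apply_Minv(r):
    # SSOR preconditioner: M = (D + wL) D^{-1} (D + wU) / (w (2 - w)),
    # so M^{-1} r is two triangular solves and a diagonal scaling.
    y = solve_triangular(M_lo, r, lower=True)
    return w * (2.0 - w) * solve_triangular(M_up, d * y, lower=False)

M = LinearOperator(A.shape, matvec=apply_Minv)
x, info = gmres(A, b, M=M)           # preconditioned GMRES on the original A
```

With `w = 1` this reduces to a symmetric Gauss-Seidel preconditioner; the point of the sketch is that the modified (2,2) block enters only the preconditioner, while GMRES still solves the original system.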
Finally, a product preconditioner is constructed for symmetric indefinite saddle point matrices as the product of two fairly simple preconditioners: the popular constraint preconditioner and the well-known block Jacobi preconditioner. Results concerning the eigenvalue distribution, the form of the eigenvectors of the product-preconditioned matrix, and its minimal polynomial are analyzed. Numerical experiments show the efficiency of the proposed product preconditioner in reducing computational time and iteration counts.

Part two studies matrix splitting methods with applications to Markov chain problems. Many computational methods for nonsingular large-scale sparse linear systems cannot be applied directly to Markov chain problems, since the coefficient matrices there are typically irreducible and singular. To overcome this difficulty, a theorem is first presented showing that there exists a nonnegative constant such that the shifted linear system is positive definite. Two iteration methods based on matrix splittings are then developed to compute the stationary probability distribution vector of Markov chains: the symmetric and skew-symmetric splitting (SSS) iteration method and the triangular and skew-symmetric splitting (TSS) iteration method. Both methods require solving two subsystems per iteration; their convergence properties and optimal parameter selections are studied. In practical computation, if these subsystems are solved exactly by direct methods, the SSS and TSS iterations lose their advantage, because the cost of solving the subsystems equals that of solving the original system by a direct solver. Consequently, the subsystems are solved inexactly by Krylov subspace methods, yielding the inexact SSS (ISSS) and inexact TSS (ITSS) methods.
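The alternating two-half-step structure shared by splittings of this kind can be sketched generically. The example below is an HSS-type symmetric/skew-symmetric splitting iteration for a positive definite nonsymmetric system A x = b with A = H + S, H = (A + Aᵀ)/2 and S = (A − Aᵀ)/2; the test matrix, the parameter `alpha`, and the iteration count are hypothetical, and the dissertation's SSS and TSS methods differ in their details (notably the triangular splitting in TSS and the shift used for the singular Markov chain matrix).

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
S = rng.standard_normal((n, n))
S = (S - S.T) / 2.0                      # skew-symmetric part
H = 3.0 * np.eye(n) + 0.1 * rng.standard_normal((n, n))
H = (H + H.T) / 2.0                      # symmetric positive definite part
A = H + S
b = rng.standard_normal(n)
x_true = np.linalg.solve(A, b)

alpha = 1.0                              # iteration parameter (governs the rate)
I = np.eye(n)
x = np.zeros(n)
for _ in range(200):
    # half step 1: (alpha I + H) x_half = (alpha I - S) x + b
    x = np.linalg.solve(alpha * I + H, (alpha * I - S) @ x + b)
    # half step 2: (alpha I + S) x_new = (alpha I - H) x_half + b
    x = np.linalg.solve(alpha * I + S, (alpha * I - H) @ x + b)
```

Each sweep alternates a symmetric shifted solve with a skew-symmetric shifted solve; when those inner solves are done inexactly by a Krylov method instead of `np.linalg.solve`, one obtains inexact variants in the spirit of ISSS/ITSS.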
Numerical results illustrate the effectiveness of the proposed methods.

Part three proposes an accelerated multilevel aggregation algorithm for Markov chains, obtained by combining the algorithm with vector extrapolation methods. The dissertation first briefly introduces the background of the multilevel aggregation algorithm and of vector extrapolation methods, and then presents the main drawback of the multilevel aggregation algorithm for Markov chains: the construction of the restriction and prolongation operators depends on the update of the stationary probability distribution vector at each iteration step, so the computational time becomes unacceptable. Exploiting the merits of vector extrapolation, the accelerated multilevel aggregation algorithm is established for Markov chain problems. Numerical experiments on several representative Markov chain examples illustrate the effectiveness of the proposed algorithm.

Part four studies two conjugate direction methods (Bi-CR and Bi-CG) for computing the stationary probability distribution vector of Markov chains. Motivated by the celebrated applications of these two methods to nonsingular linear systems, the dissertation extends the Bi-CR and Bi-CG methods to the singular linear systems arising from Markov chain problems, as one of the possible computational tools. Numerical experiments examine the Bi-CR and Bi-CG methods and show that both work well for Markov chain problems.
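The stationary-vector computation of part four can be illustrated with a small example. The 4-state transition matrix `P` below is hypothetical, and the sketch differs from the dissertation's approach: instead of running Bi-CG directly on the singular system (I − Pᵀ)π = 0, it folds the normalization eᵀπ = 1 into the system to make it nonsingular, then calls SciPy's `bicg`.

```python
import numpy as np
from scipy.sparse.linalg import bicg

# Hypothetical irreducible 4-state Markov chain (rows of P sum to 1).
# The stationary vector pi satisfies pi P = pi, i.e. (I - P^T) pi = 0,
# together with e^T pi = 1.
P = np.array([[0.5, 0.3, 0.1, 0.1],
              [0.2, 0.4, 0.3, 0.1],
              [0.1, 0.3, 0.4, 0.2],
              [0.2, 0.2, 0.2, 0.4]])
n = P.shape[0]

A = np.eye(n) - P.T
A[-1, :] = 1.0               # replace the last equation by e^T pi = 1
b = np.zeros(n)
b[-1] = 1.0

pi, info = bicg(A, b)        # Bi-CG on the (now nonsingular) system
```

For an irreducible chain the modified system has a unique solution, which is the stationary distribution; the same right-hand side trick is a common alternative when a Krylov solver is not set up to handle a singular, homogeneous system.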
Keywords/Search Tags: linear algebraic systems, matrix splitting iterative methods, Krylov subspace methods, preconditioner, saddle point problems, Markov chain problems, stationary probability distribution vector