
Parallel Nonstationary Iterative Methods For Solving Large Sparse Linear Systems

Posted on: 2002-10-23
Degree: Doctor
Type: Dissertation
Country: China
Candidate: T X Gu
Full Text: PDF
GTID: 1100360032452084
Subject: Computational Mathematics
Abstract/Summary:
At present, a key open problem is how to exploit the potential of parallel computers and develop highly efficient parallel methods for solving large sparse linear systems. The central issues in designing parallel algorithms and implementing parallel programs are to reduce global communication and to maintain load balance among processors according to the architectural characteristics of different parallel computers. Based on this idea, this thesis systematically studies parallel nonstationary iterative methods for solving large sparse linear systems on high-performance parallel computers. The main work is divided into two parts.

Chapters 3 and 4 constitute part one. From the viewpoint of reducing global communication and distributing data reasonably, the following work is carried out for distributed-memory massively parallel processing systems. (1) By introducing multiple search directions, we propose a new type of conjugate gradient method, called the multiple search directions conjugate gradient method (MSD-CG for short). The method is based on domain decomposition. In the MSD-CG method, the global inner products of the classical CG method are replaced by the solution of small linear systems, which can be solved by direct or iterative methods. The structure of these small systems is determined by the domain decomposition and by the choice of the multiple search directions. If the small systems are solved approximately by an iterative method, we obtain an approximate version of MSD-CG that requires only communication between neighboring subdomains and eliminates global inner products entirely. We call it the global inner product free conjugate gradient (GIPF-CG) method; it is therefore well suited to massively parallel computation and is an efficient implementation of the CG idea on massively parallel computers. A preconditioned version of the MSD-CG method is also introduced. (2) We establish the convergence and consistency theory of the MSD-CG and GIPF-CG methods, and obtain the basic properties and convergence rate estimates of the MSD-CG method. In particular, we show that MSD-CG converges at least as fast as the steepest descent method. (3) We carry out extensive numerical experiments on the Dawn 3000 and on a PIII cluster built by ourselves. The results show that although GIPF-CG converges slightly more slowly than CG, its high parallel efficiency makes it more efficient overall. Since global information is contained in the small systems, our method exchanges global information at each step; it therefore overcomes the severe deterioration of the additive Schwarz domain decomposition method as the number of subdomains increases. Our method can also be improved considerably by preconditioning. The algorithmic ideas behind MSD-CG and GIPF-CG can easily be generalized to nonsymmetric problems, and are instructive for parallelizing many other methods, such as CR, CGS, BiCG, BiCGSTAB, CGNR, CGNE and QMR.

Part two consists of Chapters 5 and 6. We design relaxed nonstationary two-stage multisplitting methods, which are naturally parallel and achieve better load balance. The limitations of (stationary) multisplitting and two-stage multisplitting methods are their synchronism and the load imbalance that grows more severe as the number of processors increases...
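To make the idea of part one concrete, the following NumPy sketch shows a multiple-search-direction iteration of the general kind described above: each subdomain contributes one search direction (here simply the residual restricted to that subdomain), and all step lengths are obtained at once from a small s-by-s system, which is what replaces the global inner products of classical CG. The partition, the test matrix, and the steepest-descent-style choice of directions are illustrative assumptions; the conjugation of successive directions and the preconditioned variant of the actual MSD-CG/GIPF-CG methods are omitted.

```python
import numpy as np

def msd_like_iteration(A, b, x0, part, tol=1e-8, maxit=500):
    """Minimal sketch of a multiple-search-direction CG-type iteration.

    A    : symmetric positive definite matrix (n x n)
    b    : right-hand side
    part : list of index arrays, one per subdomain (a simple 1-D partition)

    Each sweep builds one search direction per subdomain (the residual
    restricted to that subdomain) and obtains all step lengths at once
    from the small s x s system (P^T A P) alpha = P^T r, which replaces
    the global inner products of classical CG.
    """
    x = x0.copy()
    r = b - A @ x
    for k in range(maxit):
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        # One column per subdomain: the locally restricted residual.
        P = np.zeros((len(b), len(part)))
        for i, idx in enumerate(part):
            P[idx, i] = r[idx]
        AP = A @ P
        # Small dense system in place of CG's global inner products.
        M = P.T @ AP
        g = P.T @ r
        alpha, *_ = np.linalg.lstsq(M, g, rcond=None)
        x += P @ alpha
        r -= AP @ alpha
    return x, k

# Usage: diagonally dominant tridiagonal test system, 4 subdomains.
n = 200
A = 4.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
part = np.array_split(np.arange(n), 4)
x, iters = msd_like_iteration(A, b, np.zeros(n), part)
print(iters, np.linalg.norm(b - A @ x))
```

The least-squares solve of the small system guards against a zero residual on some subdomain. In a distributed setting each column of P lives on its own processor, and assembling the s-by-s system couples only subdomains that are neighbors through A, which is the sense in which global inner products are avoided.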
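In the same spirit, here is a hedged sketch of a relaxed nonstationary two-stage multisplitting iteration of the kind studied in part two. The particular outer splittings (each B_l agrees with A on the block it owns and with diag(A) elsewhere), the Jacobi inner stage, the relaxation parameter omega, and the rule that varies the number of inner sweeps per outer step are all illustrative assumptions; a real implementation would run the local solves on separate processors, possibly asynchronously.

```python
import numpy as np

def relaxed_two_stage_multisplitting(A, b, x0, blocks, omega=0.9,
                                     inner_steps=lambda l, k: 2 + (k % 3),
                                     tol=1e-8, maxit=500):
    """Minimal sketch of a relaxed, nonstationary two-stage multisplitting
    iteration (illustrative construction, not the thesis algorithm).

    Outer splittings: for each block l, B_l equals A on the l-th diagonal
    block and diag(A) elsewhere; the weighting matrix E_l selects block l,
    so the local solves are independent and can run in parallel.
    Inner stage: each system B_l y = c_l is solved approximately by a
    varying (nonstationary) number of Jacobi sweeps.  The relaxation
    parameter omega damps the weighted combination of the local updates.
    """
    n = len(b)
    x = x0.copy()
    d = np.diag(A)
    for k in range(maxit):
        r = b - A @ x
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        x_new = np.zeros(n)
        for l, idx in enumerate(blocks):          # independent per processor
            # Outer splitting A = B_l - C_l, iterate B_l y = C_l x + b.
            B_l = np.diag(d)
            B_l[np.ix_(idx, idx)] = A[np.ix_(idx, idx)]
            c_l = (B_l - A) @ x + b
            # Inner stage: q(l, k) Jacobi sweeps on B_l y = c_l, start from x.
            y = x.copy()
            dB = np.diag(B_l)
            for _ in range(inner_steps(l, k)):
                y = y + (c_l - B_l @ y) / dB
            # Weighting E_l keeps only the components owned by block l.
            x_new[idx] = y[idx]
        x = omega * x_new + (1.0 - omega) * x      # relaxation
    return x, k

# Usage: strictly diagonally dominant tridiagonal system, 4 blocks.
n = 120
A = 3.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
blocks = np.array_split(np.arange(n), 4)
x, iters = relaxed_two_stage_multisplitting(A, b, np.zeros(n), blocks)
print(iters, np.linalg.norm(b - A @ x))
```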
Keywords/Search Tags: Large sparse linear systems, parallel computing, nonstationary iterative method, multiple search directions conjugate gradient, inner product, global communication, relaxation, two-stage multisplitting, asynchronous