
Overlapping Domain Decomposition Algorithm For 2-D Poisson Equation And Its Parallel Implementation

Posted on: 2010-02-17   Degree: Master   Type: Thesis
Country: China   Candidate: C D Wei   Full Text: PDF
GTID: 2120360272495875   Subject: Computational Mathematics
Abstract/Summary:
In the late 1940s and early 1950s, the first electronic computer and the first stored-program computer, the von Neumann computer, appeared in succession. Since then, computer speed has increased roughly tenfold about every five years. The computers of the 1950s had a serial architecture: in each clock cycle only one instruction could operate on one piece of data. Because the speed of electronic signal transmission is limited by the speed of light, it became difficult to keep improving the performance of serial machines, whose performance was approaching physical limits. Some computer experts remarked that building a serial computer faster than ten billion operations per second would be very difficult, and that even this rate is far below what applications require.

From the 1960s onward, people began to explore parallel computer architecture and design and put forward the idea of developing parallel computers. In 1972 the NASA Ames Research Center installed the world's first parallel computer, the Illiac IV, with 64 processing elements. The advent of the parallel computer marked the beginning of a new era in scientific computing: the era of parallel computing. Since then, research on high-performance computers in academia and industry has developed by leaps and bounds. In June 2008 the IBM Roadrunner supercomputer broke the petaflop barrier, giving us full confidence in the future of petascale and faster machines. Domestically, high-performance computing has achieved great development in many areas in recent years. With increasingly low-cost hardware, standardized, open and cost-effective cluster systems have begun to be accepted by the majority of users, greatly lowering the threshold for access to high-performance computing. At the same time, in the oil, aerospace, weather and other fields a number of useful, high-level applications have been developed, bringing great convenience to users.

However, there are still structural contradictions in the development of the high-performance computing industry: software and application development cannot keep up with the pace of hardware development. Professor Yuen Kwok-hing, a researcher at the Beijing Institute of Applied Physics and Computational Mathematics, has pointed out that high-performance computing software research suffers from unreasonable situations such as "equating high-performance computing software development with software programming" and "hardware, algorithm and software research being out of touch with one another", so that high-performance computing software in China faces challenges such as "restricted calculation scale, low calculation accuracy and resolution, and key applications being restricted and difficult to improve and develop".

MPI (Message Passing Interface) is the most important parallel programming tool. It offers good portability, powerful functionality and high efficiency, there is a wide range of free, efficient and practical implementations, and almost all parallel computing vendors support it, advantages that no other parallel programming environment can match. MPI appeared in 1994, comparatively late, but because it absorbed the merits of various other parallel environments and balanced performance, functionality and portability, it spread rapidly within just a few years; message passing has become the standard parallel programming model, which itself illustrates the advantages of MPI.
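To make the message-passing model concrete, the following is a minimal sketch of an MPI program in C (not code from the thesis; the problem size n is an illustrative choice). Each process does part of a sum, identified by its rank, and a single collective call combines the partial results:

    /* Minimal MPI sketch: each process sums part of 1..n and the partial
     * sums are combined with MPI_Allreduce.  Compile with mpicc and run
     * with mpirun.                                                        */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        long long n = 1000000, local = 0, total = 0;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's id      */
        MPI_Comm_size(MPI_COMM_WORLD, &size);   /* number of processes    */

        /* Each process sums the integers congruent to its rank mod size. */
        for (long long i = rank + 1; i <= n; i += size)
            local += i;

        /* Combine the partial sums; every process receives the total.    */
        MPI_Allreduce(&local, &total, 1, MPI_LONG_LONG, MPI_SUM, MPI_COMM_WORLD);

        if (rank == 0)
            printf("sum = %lld\n", total);

        MPI_Finalize();
        return 0;
    }

The same pattern of "local work plus explicit communication" carries over to the parallel domain decomposition solver studied in the thesis, where the data exchanged are the unknowns in the overlap regions between subdomains.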
The domain decomposition method was born in the early 1980s. It is an important method for the numerical solution of partial differential equations; based on the Schwarz alternating method, it offers a high degree of parallelism, good scalability and other advantages. The basic idea of the domain decomposition method is to break the whole region down into a number of sub-regions according to some principle, choose a numerical algorithm independently on each sub-region, establish the relationship between the local solutions and the overall solution, and use an iterative method to combine the local solutions into the overall solution.

In the numerical examples we compared parallel and serial experimental data. We found that when the test scale is relatively small, the speedup is not ideal, because the parallel program requires a great deal of data transfer and contains more redundant work than the serial program, so the parallel program consumes more computation time than the serial one. However, when the step size is relatively small and the test scale is larger, the computing time is much larger than the time required for data transfer; with many CPUs taking part in the computation, the computing time is greatly reduced, the speedup over the serial program is very good, and the running time is better than that of the serial program.
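To illustrate the overlapping Schwarz idea described above, here is a schematic serial sketch for the 2-D Poisson equation -Δu = f on the unit square with zero Dirichlet boundary values, discretized by the standard 5-point stencil. It is not the thesis code: the grid size N, overlap width OV, inner Jacobi sweep count and right-hand side are all illustrative choices, and each "subdomain solve" is only approximate.

    #include <stdio.h>

    #define N      64        /* interior points per direction          */
    #define OV     4         /* overlap extends OV columns past mid    */
    #define SWEEPS 20        /* Jacobi sweeps per subdomain solve      */

    static double u[N + 2][N + 2], unew[N + 2][N + 2], f[N + 2][N + 2];

    /* Jacobi sweeps on columns jlo..jhi; the current values of u outside
     * that strip act as (artificial) Dirichlet boundary data.            */
    static void solve_subdomain(int jlo, int jhi, double h2)
    {
        for (int s = 0; s < SWEEPS; s++) {
            for (int i = 1; i <= N; i++)
                for (int j = jlo; j <= jhi; j++)
                    unew[i][j] = 0.25 * (u[i-1][j] + u[i+1][j] +
                                         u[i][j-1] + u[i][j+1] + h2 * f[i][j]);
            for (int i = 1; i <= N; i++)
                for (int j = jlo; j <= jhi; j++)
                    u[i][j] = unew[i][j];
        }
    }

    int main(void)
    {
        double h  = 1.0 / (N + 1);
        double h2 = h * h;
        int mid = N / 2;

        for (int i = 0; i <= N + 1; i++)        /* right-hand side f = 1 */
            for (int j = 0; j <= N + 1; j++)
                f[i][j] = 1.0;

        /* Outer Schwarz iterations: the two overlapping vertical strips
         * (columns 1..mid+OV and mid-OV..N) are solved alternately, each
         * picking up the other's latest values inside the overlap.       */
        for (int it = 0; it < 50; it++) {
            solve_subdomain(1, mid + OV, h2);
            solve_subdomain(mid - OV, N, h2);
        }

        printf("u at the center: %f\n", u[N / 2][N / 2]);
        return 0;
    }

In a parallel implementation each strip would be owned by one MPI process, and the copy of overlap columns that this sketch gets "for free" through the shared arrays would instead be carried out by explicit message passing at every outer iteration, which is exactly the communication cost discussed in the experiments.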
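For reference, the speedup and parallel efficiency used when judging such experiments are conventionally defined (these are standard definitions, not results from the thesis) as

    \[
        S_p = \frac{T_1}{T_p}, \qquad E_p = \frac{S_p}{p},
    \]

where $T_1$ is the running time of the serial program and $T_p$ the running time on $p$ processors. Communication and redundant computation keep $S_p$ well below $p$ on small problems, while on large problems the computation dominates the data transfer and $S_p$ approaches $p$, in line with the behaviour reported above.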
Keywords/Search Tags:Parallel Computing, Domain Decomposition, MPI