
Design And Implementation Of Compiler Directives For Task Parallelism In Message Passing Computing

Posted on: 2013-10-12    Degree: Master    Type: Thesis
Country: China    Candidate: M Y Huang    Full Text: PDF
GTID: 2248330362963685    Subject: Software engineering
Abstract/Summary:
The rapid development of computer science and technology has brought rapid growth in the demand for computing power, which a common desktop PC can no longer meet. The response to this problem has been twofold. On one hand, researchers have built a variety of supercomputers and have kept advancing and enhancing their architectures. On the other hand, programmers must learn new techniques and refactor their code to use these new computing resources well, which places a heavy burden on them. Researchers have proposed many parallel programming frameworks and programming methods aimed at easing this burden, but existing work still has drawbacks. For example, using such frameworks requires extensive changes to the serial version of the code, which not only makes the code harder to maintain but also leaves programmers little room to optimize or fine-tune it. Another widely used approach, OpenMP, has its own shortcomings: it works only in a shared memory address space and cannot adapt to the distributed memory address spaces that are common in supercomputers.

MPI is the de facto standard for message passing computing. After studying the limitations of existing work, this thesis designs and implements a set of compiler directives, together with a corresponding source-to-source compiler, for task parallelism in message passing programming. Serial code that follows the workgroup, pipeline, or parallel-loop pattern can be parallelized with the presented directives.

The presented tool proceeds as follows: 1) scan the source code and return the tokens of the directives; 2) parse the tokens and generate work lists; 3) process the code with a pipe-and-filter structure and generate the corresponding parallel code; and 4) optimize the generated parallel code. Flex and Bison are used to generate the scanner and the parser for the scanning and parsing steps.

The experiments and evaluation cover three cases: the EP program from the NAS Parallel Benchmarks (NPB) suite, common matrix multiplication, and an image stylization program. The results demonstrate that parallel programs generated by the presented compiler directives and tool achieve good speedups with few code changes relative to the original serial version.
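To illustrate the kind of transformation described above, the sketch below shows a serial loop annotated with a hypothetical directive and the MPI code a source-to-source tool could emit for it. The directive spelling (#pragma task loop), the function and variable names, and the block distribution scheme are assumptions made for this example only; the abstract does not give the thesis's actual directive syntax.

#include <mpi.h>

/* Hypothetical input: a serial loop annotated with an assumed loop-parallel
 * directive. The spelling "#pragma task loop" is illustrative only. */
void serial_loop(const double *a, const double *b, double *c, int N)
{
#pragma task loop
    for (int i = 0; i < N; i++)
        c[i] = a[i] * b[i];
}

/* Sketch of the MPI code a source-to-source tool could generate for the
 * annotated loop: each rank computes one block of iterations and the blocks
 * are gathered on rank 0. Assumes N is divisible by the number of ranks. */
void generated_loop(const double *a, const double *b, double *c, int N)
{
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int chunk = N / size;               /* iterations per rank                  */
    int begin = rank * chunk;
    int end   = begin + chunk;

    for (int i = begin; i < end; i++)   /* original loop body, this rank's block */
        c[i] = a[i] * b[i];

    /* Collect every rank's block into c on rank 0; the root's own block is
     * already in place, so it uses MPI_IN_PLACE to avoid buffer aliasing. */
    if (rank == 0)
        MPI_Gather(MPI_IN_PLACE, chunk, MPI_DOUBLE,
                   c, chunk, MPI_DOUBLE, 0, MPI_COMM_WORLD);
    else
        MPI_Gather(c + begin, chunk, MPI_DOUBLE,
                   NULL, 0, MPI_DOUBLE, 0, MPI_COMM_WORLD);
}

The loop pattern maps naturally onto this block distribution; the workgroup and pipeline patterns mentioned in the abstract would map onto different communication structures in the generated code.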
Keywords/Search Tags: compiler directives, parallel programming, message passing programming, code generation, distributed memory address