<p><b>Abstract</b>—The construction of efficient parallel programs usually requires expert knowledge in the application area and deep insight into the architecture of a specific parallel machine. Often, the resulting performance is not portable, i.e., a program that is efficient on one machine is not necessarily efficient on another machine with a different architecture. Transformation systems provide a more flexible solution: they start with a specification of the application problem and allow the generation of efficient programs for different parallel machines. The programmer has to give an exact specification of the algorithm expressing the inherent degree of parallelism and is relieved of the low-level details of the architecture. In this article, we propose such a transformation system with an emphasis on the exploitation of data parallelism combined with a hierarchically organized structure of task parallelism. Starting with a specification of the maximum degree of task and data parallelism, the transformations generate a specification of a parallel program for a specific parallel machine. The transformations are based on a cost model and are applied in a predefined order, fixing the most important design decisions, such as the scheduling of independent multitask activations, data distributions, pipelining of tasks, and the assignment of processors to task activations. We demonstrate the usefulness of the approach with examples from scientific computing.</p>
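One of the design decisions the abstract mentions, scheduling independent multitask activations, amounts to choosing between running such tasks one after another (each data-parallel on all processors) or concurrently on disjoint processor groups. The following is a minimal sketch of how a cost model can fix that decision; the cost function and all parameters are hypothetical illustrations, not the model used in the paper:

```python
# Toy cost-model-driven scheduling decision for two independent task
# activations on P processors. The cost function (work/p plus a linear
# communication term with factor beta) is an assumed illustration.

def task_cost(work, p, beta=1.0):
    """Estimated runtime of one task activation on p processors."""
    return work / p + beta * p  # computation time + communication overhead

def sequential_cost(works, P):
    """Run the tasks one after another, each data-parallel on all P processors."""
    return sum(task_cost(w, P) for w in works)

def best_concurrent_split(w0, w1, P):
    """Run both tasks concurrently on disjoint processor groups and
    pick the group sizes that minimize the makespan."""
    return min(((max(task_cost(w0, p), task_cost(w1, P - p)), p)
                for p in range(1, P)), key=lambda t: t[0])

if __name__ == "__main__":
    P = 8
    seq = sequential_cost([100, 100], P)               # 41.0
    makespan, p0 = best_concurrent_split(100, 100, P)  # 29.0, 4
    # Here the concurrent (task-parallel) schedule wins: the cost model
    # would fix groups of 4 and 4 processors for the two activations.
    print(seq, makespan, p0)
```

For two equal tasks the model splits the processors evenly; for unequal work it shifts processors toward the heavier task, which is exactly the kind of trade-off a transformation step can decide automatically instead of leaving it to the programmer.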
Index Terms: Transformation system, coordination language, task and data parallelism, hierarchical module structure, data distribution types, scientific computing, message-passing program, MPI.

T. Rauber and G. Rünger, "A Transformation Approach to Derive Efficient Parallel Implementations," IEEE Transactions on Software Engineering, vol. 26, pp. 315-339, 2000.