Parallel and Distributed Processing Symposium, International (2012)
Shanghai, China
May 21, 2012 to May 25, 2012
ISSN: 1530-2075
ISBN: 978-1-4673-0975-2
pp: 557-568
ABSTRACT
Clusters of GPUs are emerging as a new computational scenario. Programming them requires hybrid models that increase application complexity and reduce programmer productivity. We present an implementation of OmpSs for clusters of GPUs that supports asynchrony and heterogeneity for task parallelism. It is based on annotating a serial application with directives that are translated by the compiler. With it, the same program that runs sequentially on a node with a single GPU can run in parallel on multiple GPUs, either local (within a single node) or remote (across a cluster of GPUs). In addition to performing a task-based parallelization, the runtime system moves data as needed between the different nodes and GPUs, minimizing communication overhead through affinity scheduling, caching, and overlapping communication with computation. We show several applications programmed with OmpSs and their performance with multiple GPUs in a local node and in remote nodes. The results show a good tradeoff between performance and programmer effort.
INDEX TERMS
Graphics processing unit, Runtime, Computer architecture, Kernel, Programming, Message systems, Coherence, OpenMP, Cluster programming, GPGPU computing, accelerators
CITATION

E. Ayguade et al., "Productive Programming of GPU Clusters with OmpSs," Parallel and Distributed Processing Symposium, International (IPDPS), Shanghai, China, 2012, pp. 557-568.
doi:10.1109/IPDPS.2012.58