2014 23rd International Conference on Parallel Architecture and Compilation (PACT)
Edmonton, Canada
Aug. 23, 2014 to Aug. 27, 2014
ISBN: 978-1-5090-6607-0
pp: 467-468
Javier Cabezas, Barcelona Supercomputing Center
Lluis Vilanova, Barcelona Supercomputing Center
Isaac Gelado, NVIDIA Corporation
Thomas B. Jablin, University of Illinois
Nacho Navarro, Barcelona Supercomputing Center
Wen-mei Hwu, University of Illinois
ABSTRACT
We present AMGE, a programming framework and runtime system that decomposes data and GPU kernels and executes them on multiple GPUs concurrently. AMGE exploits the remote memory access capability of recent GPUs to guarantee data accessibility regardless of physical location, allowing it to safely decompose and distribute arrays across GPU memories. AMGE also includes a compiler analysis that detects array access patterns in GPU kernels. The runtime uses this information to automatically choose the best computation and data distribution configuration. Through effective use of GPU caches, AMGE achieves good scalability despite the limited interconnect bandwidth between GPUs. Results show 1.95× and 3.73× execution speedups on 2 and 4 GPUs, respectively, for a wide range of dense computations compared to the original single-GPU versions.
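The remote memory access capability the abstract refers to is exposed in CUDA as peer access between devices. The following sketch is only an illustration of that underlying mechanism, not of AMGE's framework or API: it assumes two peer-capable GPUs in one node and shows a kernel launched on GPU 0 directly reading an array that physically resides in GPU 1's memory. All names (e.g., scale_remote, in_gpu1) are hypothetical.

#include <cstdio>
#include <cuda_runtime.h>

// Kernel running on GPU 0 that reads an array resident on GPU 1 through
// unified virtual addressing once peer access has been enabled.
__global__ void scale_remote(const float *remote_in, float *local_out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) local_out[i] = 2.0f * remote_in[i];
}

int main() {
    const int n = 1 << 20;

    // Allocate the input on GPU 1 and the output on GPU 0.
    float *in_gpu1 = nullptr, *out_gpu0 = nullptr;
    cudaSetDevice(1);
    cudaMalloc(&in_gpu1, n * sizeof(float));
    cudaMemset(in_gpu1, 0, n * sizeof(float));  // placeholder initialization
    cudaSetDevice(0);
    cudaMalloc(&out_gpu0, n * sizeof(float));

    // Check whether GPU 0 can dereference pointers that live in GPU 1's memory.
    int can_access = 0;
    cudaDeviceCanAccessPeer(&can_access, 0, 1);
    if (can_access) {
        // Current device is 0; map device 1's memory into its address space.
        cudaDeviceEnablePeerAccess(1, 0);
        // GPU 0 reads in_gpu1 remotely over the inter-GPU interconnect.
        scale_remote<<<(n + 255) / 256, 256>>>(in_gpu1, out_gpu0, n);
        cudaDeviceSynchronize();
    } else {
        printf("Peer access between GPU 0 and GPU 1 is not available.\n");
    }

    cudaFree(out_gpu0);
    cudaSetDevice(1);
    cudaFree(in_gpu1);
    return 0;
}

In this manual form the programmer must decide which arrays live on which GPU and how kernels are partitioned; the paper's contribution is automating those decisions for unmodified single-GPU kernels.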
INDEX TERMS
Graphics processing units, Kernel, Arrays, Runtime, Programming, Matrix decomposition
CITATION
Javier Cabezas, Lluis Vilanova, Isaac Gelado, Thomas B. Jablin, Nacho Navarro, Wen-mei Hwu, "Automatic execution of single-GPU computations across multiple GPUs", 2014 23rd International Conference on Parallel Architecture and Compilation (PACT), pp. 467-468, 2014, doi:10.1145/2628071.2628109