2017 46th International Conference on Parallel Processing (ICPP)
Bristol, United Kingdom
Aug. 14, 2017 to Aug. 17, 2017
ISSN: 2332-5690
ISBN: 978-1-5386-1042-8
pp: 523-532
ABSTRACT
Optimizing the performance of GPU kernels is challenging for both human programmers and code generators. For example, CUDA programmers must set thread and block parameters for a kernel, but may lack the intuition to make a good choice. Similarly, compilers can generate working code, but may miss tuning opportunities by not targeting specific GPU models or applying code transformations. Although empirical autotuning addresses some of these challenges, it requires extensive experimentation and search to find optimal code variants. This research presents an approach for tuning CUDA kernels based on static analysis that considers fine-grained code structure and specific GPU architecture features. Notably, our approach does not require any program runs to discover near-optimal parameter settings. We demonstrate the applicability of our approach by enabling code autotuners such as Orio to produce code variants competitive with those obtained by empirical methods, without the high cost of experimentation.
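
To make the parameter-selection problem concrete, the sketch below (not taken from the paper) shows a CUDA kernel whose launch configuration a programmer would otherwise guess by hand. The kernel saxpy, the problem size, and all launch values are illustrative assumptions; the call to the CUDA runtime's cudaOccupancyMaxPotentialBlockSize stands in for the general idea of choosing a block size analytically, without executing the program, though the paper's static analysis is finer-grained than this occupancy heuristic.

    #include <cstdio>
    #include <cuda_runtime.h>

    // Illustrative example kernel (not from the paper): element-wise SAXPY.
    __global__ void saxpy(int n, float a, const float *x, float *y) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) y[i] = a * x[i] + y[i];
    }

    int main() {
        const int n = 1 << 20;
        float *x, *y;
        cudaMalloc(&x, n * sizeof(float));
        cudaMalloc(&y, n * sizeof(float));

        // Instead of hand-picking a block size (e.g., a guess of 256 threads),
        // ask the runtime for an occupancy-based suggestion derived from the
        // compiled kernel's register and shared-memory usage on this device --
        // analogous in spirit to choosing parameters without empirical runs.
        int minGridSize = 0, blockSize = 0;
        cudaOccupancyMaxPotentialBlockSize(&minGridSize, &blockSize, saxpy, 0, 0);

        int gridSize = (n + blockSize - 1) / blockSize;  // cover all n elements
        saxpy<<<gridSize, blockSize>>>(n, 2.0f, x, y);
        cudaDeviceSynchronize();

        printf("launched with gridSize=%d blockSize=%d\n", gridSize, blockSize);
        cudaFree(x);
        cudaFree(y);
        return 0;
    }

A hand-tuned variant would simply hard-code blockSize; the occupancy query instead derives a suggestion from the kernel's resource footprint, which is the kind of model-driven, run-free decision the paper's static analysis generalizes.
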
INDEX TERMS
Graphics processing units, Instruction sets, Kernel, Registers, Computer architecture, Measurement, Hardware
CITATION
R. Lim, B. Norris and A. Malony, "Autotuning GPU Kernels via Static and Predictive Analysis," 2017 46th International Conference on Parallel Processing (ICPP), Bristol, United Kingdom, 2017, pp. 523-532.
doi:10.1109/ICPP.2017.61