Proceedings of the 22nd International Conference on Parallel Architectures and Compilation Techniques (2013)
Edinburgh, United Kingdom
Sept. 7, 2013 to Sept. 11, 2013
ISSN: 1089-795X
ISBN: 978-1-4799-1018-2
pp: 157-166
Onur Kayiran , Dept. of Comput. Sci. & Eng., Pennsylvania State Univ., University Park, PA, USA
Adwait Jog , Dept. of Comput. Sci. & Eng., Pennsylvania State Univ., University Park, PA, USA
Mahmut T. Kandemir , Dept. of Comput. Sci. & Eng., Pennsylvania State Univ., University Park, PA, USA
Chita R. Das , Dept. of Comput. Sci. & Eng., Pennsylvania State Univ., University Park, PA, USA
ABSTRACT
General-purpose graphics processing units (GPGPUs) are at their best when accelerating computation by exploiting the abundant thread-level parallelism (TLP) offered by many classes of HPC applications. To facilitate such high TLP, emerging programming models like CUDA and OpenCL allow programmers to express work in terms of smaller units, called cooperative thread arrays (CTAs). CTAs are groups of threads that can be executed in any order, providing ample opportunities for TLP. State-of-the-art GPGPU schedulers allocate the maximum possible number of CTAs per core (limited by available on-chip resources) to enhance performance by exploiting TLP. However, we demonstrate in this paper that executing the maximum possible number of CTAs on a core is not always the optimal choice from a performance perspective. A high number of concurrently executing threads can cause more memory requests to be issued, creating contention in the caches, network, and memory, which leads to long stalls at the cores. To reduce this resource contention, we propose a dynamic CTA scheduling mechanism, called DYNCTA, which modulates TLP by allocating an optimal number of CTAs based on application characteristics. To minimize resource contention, DYNCTA allocates fewer CTAs to applications suffering from high contention in the memory subsystem than to applications demonstrating high throughput. Simulation results on a 30-core GPGPU platform with 31 applications show that the proposed CTA scheduler provides a 28% average improvement in performance over the existing CTA scheduler.
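The control idea described above, throttling per-core TLP when the memory subsystem is contended and raising it when cores sit idle for lack of work, can be pictured as a simple per-epoch feedback loop. The C sketch below is illustrative only: the counter names (idle_cycles, stall_cycles), the thresholds, and the epoch handling are assumptions for exposition, not the paper's tuned DYNCTA parameters.

    /* Minimal sketch of a DYNCTA-style control loop, assuming two
     * hypothetical per-core counters sampled each epoch: cycles with
     * no ready warp to issue, and cycles stalled on outstanding
     * memory requests. Thresholds and the CTA cap are illustrative. */
    #include <stdio.h>

    #define MAX_CTAS        8     /* resource-limited CTA cap per core */
    #define IDLE_THRESHOLD  2048  /* illustrative threshold values     */
    #define STALL_THRESHOLD 8192

    typedef struct {
        unsigned idle_cycles;   /* cycles with no ready warp to issue   */
        unsigned stall_cycles;  /* cycles stalled on pending memory ops */
        int active_ctas;        /* CTAs currently allowed to execute    */
    } core_stats_t;

    /* Called once per sampling epoch for each core. */
    void dyncta_adjust(core_stats_t *c)
    {
        if (c->stall_cycles > STALL_THRESHOLD && c->active_ctas > 1) {
            c->active_ctas--;   /* memory contention: throttle TLP   */
        } else if (c->idle_cycles > IDLE_THRESHOLD &&
                   c->active_ctas < MAX_CTAS) {
            c->active_ctas++;   /* core starved for work: raise TLP  */
        }
        c->idle_cycles = c->stall_cycles = 0;  /* reset for next epoch */
    }

    int main(void)
    {
        core_stats_t core = { .idle_cycles = 4096, .stall_cycles = 0,
                              .active_ctas = 4 };
        dyncta_adjust(&core);   /* idle core: TLP is raised to 5 CTAs */
        printf("active CTAs after epoch: %d\n", core.active_ctas);
        return 0;
    }

The key design point the sketch captures is that the scheduler never needs application-specific tuning offline: it converges toward fewer active CTAs for memory-contended workloads and toward the resource-limited maximum for compute-bound ones.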
INDEX TERMS
Instruction sets, Parallel processing, Kernel, Graphics processing units, Pipelines, Transform coding, Measurement
CITATION
Onur Kayiran, Adwait Jog, Mahmut T. Kandemir, Chita R. Das, "Neither More Nor Less: Optimizing Thread-level Parallelism for GPGPUs", Proceedings of the 22nd International Conference on Parallel Architectures and Compilation Techniques (PACT), pp. 157-166, 2013, doi:10.1109/PACT.2013.6618813