Proceedings of the 22nd International Conference on Parallel Architectures and Compilation Techniques (2013)
Edinburgh, United Kingdom
Sept. 7, 2013 to Sept. 11, 2013
Vineeth Mekkat , Dept. of Comput. Sci. & Eng., Univ. of Minnesota, Minneapolis, MN, USA
Anup Holey , Dept. of Comput. Sci. & Eng., Univ. of Minnesota, Minneapolis, MN, USA
Pen-Chung Yew , Dept. of Comput. Sci. & Eng., Univ. of Minnesota, Minneapolis, MN, USA
Antonia Zhai , Dept. of Comput. Sci. & Eng., Univ. of Minnesota, Minneapolis, MN, USA
Heterogeneous multicore processors that integrate CPU cores and data-parallel accelerators such as GPU cores onto the same die raise several new issues for sharing various on-chip resources. The shared last-level cache (LLC) is one of the most important shared resources due to its impact on performance. Accesses to the shared LLC in heterogeneous multicore processors can be dominated by the GPU due to the significantly higher number of threads it supports. Under current cache management policies, the CPU applications' share of the LLC can be significantly reduced in the presence of competing GPU applications. For cache-sensitive CPU applications, a reduced share of the LLC could lead to significant performance degradation. In contrast, GPU applications can often tolerate the increased memory access latency caused by LLC misses when there is sufficient thread-level parallelism. In this work, we propose Heterogeneous LLC Management (HeLM), a novel shared LLC management policy that takes advantage of the GPU's tolerance for memory access latency. HeLM is able to throttle GPU LLC accesses and yield LLC space to cache-sensitive CPU applications. GPU LLC access throttling is achieved by allowing GPU threads that can tolerate longer memory access latencies to bypass the LLC. The latency tolerance of a GPU application is determined by the availability of thread-level parallelism, which can be measured at runtime as the average number of threads that are available for issuing. Our heterogeneous LLC management scheme outperforms the LRU policy by 12.5% and TAP-RRIP by 5.6% for a processor with 4 CPU and 4 GPU cores.
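The bypass decision described above can be sketched as follows. This is a minimal illustrative model, not the paper's implementation: the function name, the threshold parameter, and the cache-sensitivity flag are all assumptions introduced here for clarity. The idea is that a GPU memory access skips the shared LLC only when the GPU application's runtime-measured thread-level parallelism (average number of threads ready to issue) is high enough to hide the extra latency, and a co-running CPU application would benefit from the freed LLC space.

```python
# Hypothetical sketch of a HeLM-style GPU LLC bypass decision.
# All names and threshold values below are illustrative assumptions,
# not taken from the paper.

def should_bypass_llc(avg_ready_threads: float,
                      tlp_threshold: float,
                      cpu_cache_sensitive: bool) -> bool:
    """Decide whether a GPU memory access should bypass the shared LLC.

    avg_ready_threads: runtime average of GPU threads available for issuing
                       (the paper's proxy for latency tolerance).
    tlp_threshold:     minimum parallelism needed to hide the miss latency
                       (illustrative tuning parameter).
    cpu_cache_sensitive: whether a co-running CPU application is
                       cache sensitive and would use the yielded space.
    """
    # Bypass only when the GPU can tolerate the latency AND the
    # CPU side actually benefits from extra LLC capacity.
    return cpu_cache_sensitive and avg_ready_threads >= tlp_threshold
```

With ample parallelism (say 24 ready threads against a threshold of 16) and a cache-sensitive CPU co-runner, the access bypasses the LLC; with low parallelism, or when no CPU application needs the space, the GPU access uses the LLC as usual.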
Graphics processing units, Multicore processing, Sensitivity, Instruction sets, Benchmark testing, Runtime
V. Mekkat, A. Holey, P.-C. Yew and A. Zhai, "Managing shared last-level cache in a heterogeneous multicore processor," Proceedings of the 22nd International Conference on Parallel Architectures and Compilation Techniques (PACT), Edinburgh, United Kingdom, 2013, pp. 225-234.