2014 23rd International Conference on Parallel Architecture and Compilation (PACT) (2014)
Edmonton, Canada
Aug. 23, 2014 to Aug. 27, 2014
ISBN: 978-1-5090-6607-0
pp: 357-368
Wei Ding, Mahmut Kandemir, Diana Guttman, Adwait Jog, Chita R. Das, Praveen Yedlapalli (all with the Department of Computer Science and Engineering, The Pennsylvania State University, University Park, Pennsylvania, USA)
ABSTRACT
Most prior compiler-based data locality optimization work targets cache locality exclusively; row-buffer locality in DRAM banks has received much less attention. In particular, to the best of our knowledge, no compiler-based approach improves row-buffer locality when executing irregular applications. This is a critical problem, since executing irregular applications in a power- and performance-efficient manner will be a key requirement for extracting maximum benefit from emerging multicore machines and exascale systems. Motivated by these observations, this paper makes the following contributions. First, it presents a compiler-runtime cooperative data layout optimization approach that takes as input an irregular program already optimized for cache locality and generates output code with the same cache performance but better row-buffer locality (fewer row-buffer misses). Second, it discusses a more aggressive strategy that sacrifices some cache performance to further improve row-buffer performance (i.e., it trades cache performance for memory system performance). The ultimate goal of this strategy is to find the right tradeoff point between cache performance and row-buffer performance so that overall application performance improves. Third, the paper presents a detailed evaluation of both approaches using an AMD Opteron-based multicore system and a multicore simulator.
The experimental results, collected using five real-world irregular applications, show that (i) conventional cache optimizations do not significantly improve row-buffer locality; (ii) our first approach achieves about 9.8% execution time improvement by keeping the number of cache misses the same as in the cache-optimized code while reducing the number of row-buffer misses; and (iii) our second approach achieves even higher execution time improvements (13.8% on average) by sacrificing cache performance for additional memory performance.
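To make the row-buffer locality idea concrete, the sketch below (not the paper's actual algorithm; the row size, open-row policy, and index values are illustrative assumptions) counts row-buffer misses for an irregular access sequence under a simple single-bank, open-row model, and shows how a layout/reordering that groups accesses by DRAM row reduces misses without changing which elements are touched.

```python
ROW_SIZE = 8  # elements per DRAM row (toy value, real rows hold KBs of data)

def row_buffer_misses(indices):
    """Count row-buffer misses for a sequence of element indices,
    assuming a single bank with an open-row (row stays active) policy."""
    misses, open_row = 0, None
    for i in indices:
        row = i // ROW_SIZE  # which DRAM row this element maps to
        if row != open_row:  # accessing a different row forces a row switch
            misses += 1
            open_row = row
    return misses

# Hypothetical irregular access pattern, e.g. from x[b[i]] with index array b
irregular = [0, 17, 3, 25, 9, 18, 1, 26, 10, 2]

# Reordering that clusters accesses to the same DRAM row together
grouped = sorted(irregular, key=lambda i: i // ROW_SIZE)

print(row_buffer_misses(irregular))  # → 10 (every access switches rows)
print(row_buffer_misses(grouped))    # → 4  (one miss per distinct row)
```

The same total work is done in both cases; only the order (and, in the paper's setting, the data layout) changes, which is why row-buffer locality can improve even when cache behavior is held fixed.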
INDEX TERMS
Optimization, Arrays, Multicore processing, Layout, Random access memory, Indexes, System-on-chip
CITATION
Wei Ding, Mahmut Kandemir, Diana Guttman, Adwait Jog, Chita R. Das, and Praveen Yedlapalli, "Trading cache hit rate for memory performance," in Proc. 23rd International Conference on Parallel Architecture and Compilation (PACT), 2014, pp. 357-368, doi:10.1145/2628071.2628082