Proceedings of the 22nd International Conference on Parallel Architectures and Compilation Techniques (2013)
Edinburgh, United Kingdom
Sept. 7, 2013 to Sept. 11, 2013
ISSN: 1089-795X
ISBN: 978-1-4799-1018-2
pp: 289-298
Praveen Yedlapalli , Pennsylvania State Univ., University Park, PA, USA
Jagadish Kotra , Pennsylvania State Univ., University Park, PA, USA
Emre Kultursay , Pennsylvania State Univ., University Park, PA, USA
Mahmut Kandemir , Pennsylvania State Univ., University Park, PA, USA
Chita R. Das , Pennsylvania State Univ., University Park, PA, USA
Anand Sivasubramaniam , Pennsylvania State Univ., University Park, PA, USA
ABSTRACT
Both on-chip resource contention and off-chip latencies have a significant impact on memory requests in large-scale chip multiprocessors. We propose a memory-side prefetcher that brings data on-chip from DRAM but does not proactively push it further to the cores or caches. Sitting close to memory, it has direct knowledge of DRAM and memory-channel state, which it exploits to leverage row buffer locality and bring data from the currently open row on-chip ahead of need. This not only reduces the number of off-chip accesses for demand requests but also reduces row buffer conflicts, effectively improving DRAM access times. At the same time, our prefetcher keeps this data in a small buffer at each memory controller instead of pushing it into the caches, thereby avoiding on-chip resource contention. We show that the proposed memory-side prefetcher outperforms a state-of-the-art core-side prefetcher and an existing memory-side prefetcher. More importantly, it can also work in tandem with a core-side prefetcher to amplify the benefits. Across a wide range of multiprogrammed and multi-threaded workloads, our memory-side prefetcher improves IPC by 6.2% on average (up to 33.6%) when running alone and by 10% on average (up to 49.6%) when combined with a core-side prefetcher. By meeting requests midway, our solution reduces off-chip latencies while avoiding the on-chip resource contention caused by inaccurate and ill-timed prefetches.
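To make the mechanism concrete, the following is a minimal, illustrative Python sketch of the idea described in the abstract, not the authors' implementation: on a demand access that activates or hits a DRAM row, a few neighboring lines from that open row are copied into a small buffer at the memory controller, where later demand requests can be served without another off-chip DRAM access or a row buffer conflict. All names and parameters (PrefetchBuffer, MemoryController, LINE, ROW, the buffer capacity, and the prefetch degree) are assumptions made for this sketch.

    from collections import OrderedDict

    LINE = 64          # cache-line size in bytes (assumed)
    ROW = 8 * 1024     # DRAM row (page) size in bytes (assumed)

    class PrefetchBuffer:
        """Small FIFO buffer of prefetched lines held at the memory controller."""
        def __init__(self, capacity=32):
            self.capacity = capacity
            self.lines = OrderedDict()           # line address -> data placeholder

        def insert(self, line_addr):
            if line_addr in self.lines:
                return
            if len(self.lines) >= self.capacity:
                self.lines.popitem(last=False)   # evict the oldest prefetched line
            self.lines[line_addr] = True

        def lookup(self, line_addr):
            # Consume the prefetched line on its first demand use.
            return self.lines.pop(line_addr, None) is not None

    class MemoryController:
        def __init__(self, degree=4):
            self.buffer = PrefetchBuffer()
            self.degree = degree                 # lines prefetched per DRAM access
            self.open_row = None

        def access(self, addr):
            line_addr = addr // LINE * LINE
            # 1. Serve the request from the prefetch buffer if possible:
            #    no DRAM access and no row buffer conflict.
            if self.buffer.lookup(line_addr):
                return "prefetch-buffer hit"
            # 2. Otherwise go to DRAM; while the row is open, pull a few
            #    neighboring lines of the same row into the controller-side
            #    buffer instead of pushing them into the on-chip caches.
            row_addr = addr // ROW * ROW
            result = "row-buffer hit" if row_addr == self.open_row else "row activation"
            self.open_row = row_addr
            for i in range(1, self.degree + 1):
                next_line = line_addr + i * LINE
                if next_line < row_addr + ROW:   # never prefetch past the open row
                    self.buffer.insert(next_line)
            return result

    mc = MemoryController()
    print(mc.access(0x1000))   # row activation; lines 0x1040..0x1100 buffered
    print(mc.access(0x1040))   # served from the memory-controller buffer

Keeping the prefetched lines at the controller rather than injecting them into the cache hierarchy is what lets a mistimed or inaccurate prefetch waste only buffer space at the memory side, rather than cache capacity and on-chip network bandwidth.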
INDEX TERMS
Prefetching, System-on-chip, Random access memory, Accuracy, Delays, Proposals, Memory management
CITATION
Praveen Yedlapalli, Jagadish Kotra, Emre Kultursay, Mahmut Kandemir, Chita R. Das, Anand Sivasubramaniam, "Meeting midway: Improving CMP performance with memory-side prefetching", Proceedings of the 22nd International Conference on Parallel Architectures and Compilation Techniques (PACT), pp. 289-298, 2013, doi:10.1109/PACT.2013.6618825