Emilio Cota , Columbia University, New York
Paolo Mantovani , Columbia University, New York
Michele Petracca , Cadence Design Systems, Inc, New York
Mario Casu , Politecnico di Torino, Torino
Luca Carloni , Columbia University, New York
ABSTRACT
Accelerators integrated on-die with general-purpose CPUs (GP-CPUs) can yield significant performance and power improvements. Their extensive use, however, is ultimately limited by their area overhead; due to their high degree of specialization, the opportunity cost of investing die real estate in accelerators can become prohibitive, especially for general-purpose architectures. In this paper we present a novel technique that mitigates this opportunity cost by allowing GP-CPU cores to reuse accelerator memory as a non-uniform cache architecture (NUCA) substrate. On a system whose last-level cache is a 128 kB level-2 cache, our technique achieves on average a 25% performance improvement when reusing four 512 kB accelerator memory blocks to form a level-3 cache. Making these blocks reusable as NUCA slices incurs on average a 1.89% area overhead with respect to equally sized ad hoc cache slices.
INDEX TERMS
Accelerator architectures, Cache memory
CITATION
Emilio Cota, Paolo Mantovani, Michele Petracca, Mario Casu, Luca Carloni, "Accelerator Memory Reuse in the Dark Silicon Era", IEEE Computer Architecture Letters, no. 2, pp. 1, doi:10.1109/L-CA.2012.29