ISSN: 1556-6056
Emilio Cota, Columbia University, New York
Paolo Mantovani, Columbia University, New York
Michele Petracca, Cadence Design Systems, Inc., New York
Mario Casu, Politecnico di Torino, Torino
Luca Carloni, Columbia University, New York
Accelerators integrated on-die with general-purpose CPUs (GP-CPUs) can yield significant performance and power improvements. Their extensive use, however, is ultimately limited by their area overhead: due to their high degree of specialization, the opportunity cost of investing die real estate in accelerators can become prohibitive, especially for general-purpose architectures. In this paper we present a novel technique that mitigates this opportunity cost by allowing GP-CPU cores to reuse accelerator memory as a non-uniform cache architecture (NUCA) substrate. On a system whose last-level (level-2) cache is 128 kB, our technique achieves on average a 25% performance improvement when reusing four 512 kB accelerator memory blocks to form a level-3 cache. Making these blocks reusable as NUCA slices incurs on average a 1.89% area overhead with respect to equally sized ad hoc cache slices.
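As a rough illustration of the capacity arithmetic stated in the abstract (this is not the authors' implementation, only the numbers they report): four reusable 512 kB accelerator memory blocks aggregate into a 2 MB level-3 NUCA cache, sixteen times the capacity of the 128 kB level-2 cache, for a reported 1.89% average area overhead versus purpose-built cache slices.

```python
# Illustrative arithmetic only; all constants are taken from the abstract.
L2_KB = 128        # last-level (L2) cache size in the baseline system, in kB
SLICE_KB = 512     # one reusable accelerator memory block, in kB
NUM_SLICES = 4     # accelerator blocks repurposed as NUCA slices

l3_kb = SLICE_KB * NUM_SLICES   # aggregate L3 capacity: 2048 kB (2 MB)
ratio = l3_kb // L2_KB          # 16x the L2 capacity

# Reported average cost of making the blocks reusable,
# relative to equally sized ad hoc cache slices.
AREA_OVERHEAD = 0.0189          # 1.89%

print(f"L3 capacity: {l3_kb} kB ({ratio}x L2), area overhead: {AREA_OVERHEAD:.2%}")
```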
Accelerator architectures, Cache memory
Emilio Cota, Paolo Mantovani, Michele Petracca, Mario Casu, Luca Carloni, "Accelerator Memory Reuse in the Dark Silicon Era", IEEE Computer Architecture Letters, doi:10.1109/L-CA.2012.29