Issue No. 01 - Jan.-June 2012 (vol. 11)
ISSN: 1556-6056
pp: 21-24
Stephen A. Edwards , Columbia University, New York
Lisa Wu , Columbia University, New York
Martha A. Kim , Columbia University, New York
ABSTRACT
Hardware acceleration is a widely accepted route to high-performance, energy-efficient computation because it strips away hardware needed only for general-purpose computation while delivering exceptional performance via specialized control paths and execution units. The spectrum of accelerators available today ranges from coarse-grain off-load engines such as GPUs to fine-grain instruction set extensions such as SSE. This research explores the benefits and challenges of managing memory at the data-structure level and exposing those operations directly to the ISA. We call these instructions Abstract Datatype Instructions (ADIs). This paper quantifies the performance and energy impact of ADIs on the instruction and data cache hierarchies. For instruction fetch, our measurements indicate that ADIs reduce instruction fetch time by 21–48% and instruction fetch energy by 16–27%. For data delivery, we observe a 22–40% reduction in total data read/write time and a 9–30% reduction in total data read/write energy.
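To make the idea of an Abstract Datatype Instruction concrete, the sketch below is an illustration only, not code from the paper: the adi_map_find name, its operands, and the assumption that it would lower to a single data-structure-level instruction on an ADI-capable core are all hypothetical. It contrasts a conventional software associative lookup, whose loop body is fetched repeatedly and whose pointer chase flows through the general-purpose data cache, with the same operation expressed as one abstract datatype operation.

/*
 * Illustrative sketch only. The ADI mnemonic implied by adi_map_find() and
 * the structure handle it would take are assumptions; the paper defines the
 * actual instructions. The contrast shown: a software associative lookup
 * executes a fetched loop and walks pointers through the data cache, while
 * an ADI would collapse the whole traversal into a single instruction whose
 * memory traffic is handled by specialized hardware.
 */
#include <stdint.h>
#include <stdio.h>

#define NBUCKETS 64

struct node {
    uint64_t     key;
    uint64_t     value;
    struct node *next;
};

/* Conventional software path: hash, then chase pointers bucket by bucket.
 * Every node touched occupies a data-cache line, and the loop body costs
 * repeated instruction fetches. */
static int sw_map_find(struct node *buckets[NBUCKETS],
                       uint64_t key, uint64_t *value)
{
    for (struct node *n = buckets[key % NBUCKETS]; n; n = n->next) {
        if (n->key == key) {
            *value = n->value;
            return 1;
        }
    }
    return 0;
}

/* Hypothetical ADI path: on an ADI-capable core this call would lower to a
 * single "map find"-style instruction taking a handle to the structure and
 * the key, keeping the traversal out of the general-purpose instruction and
 * data caches. On ordinary hardware it is just a stand-in wrapper. */
static int adi_map_find(struct node *buckets[NBUCKETS],
                        uint64_t key, uint64_t *value)
{
    return sw_map_find(buckets, key, value);
}

int main(void)
{
    static struct node a = { 17, 4242, NULL };
    struct node *buckets[NBUCKETS] = { 0 };
    buckets[a.key % NBUCKETS] = &a;

    uint64_t v;
    if (adi_map_find(buckets, 17, &v))
        printf("key 17 -> %llu\n", (unsigned long long)v);
    return 0;
}

The intuition behind the reductions reported in the abstract is visible in this contrast: the ADI path fetches one instruction where the software path fetches a loop, and its data accesses need not displace general-purpose cache contents.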
INDEX TERMS
Memory Structures, Cache memories, Hardware/software interfaces, Instruction fetch, Memory hierarchy
CITATION
Stephen A. Edwards, Lisa Wu, Martha A. Kim, "Cache Impacts of Datatype Acceleration", IEEE Computer Architecture Letters, vol. 11, no. 1, pp. 21-24, Jan.-June 2012, doi:10.1109/L-CA.2011.25