Issue No. 1, Jan.–June 2012 (vol. 11)
Lisa Wu, Columbia University, New York
Martha A. Kim, Columbia University, New York
Stephen A. Edwards, Columbia University, New York
DOI Bookmark: http://doi.ieeecomputersociety.org/10.1109/L-CA.2011.25
Hardware acceleration is a widely accepted approach to performance- and energy-efficient computation: it removes hardware unneeded for general-purpose computation while delivering exceptional performance via specialized control paths and execution units. The spectrum of accelerators available today ranges from coarse-grain offload engines such as GPUs to fine-grain instruction set extensions such as SSE. This research explores the benefits and challenges of managing memory at the data-structure level and exposing those operations directly to the ISA. We call these instructions Abstract Datatype Instructions (ADIs). This paper quantifies the performance and energy impact of ADIs on the instruction and data cache hierarchies. For instruction fetch, our measurements indicate that ADIs can reduce instruction fetch time by 21–48% and instruction fetch energy by 16–27%. For data delivery, we observe a 22–40% reduction in total data read/write time and a 9–30% reduction in total data read/write energy.
Memory Structures, Cache memories, Hardware/software interfaces, Instruction fetch, Memory hierarchy
Lisa Wu, Martha A. Kim, Stephen A. Edwards, "Cache Impacts of Datatype Acceleration", IEEE Computer Architecture Letters, vol. 11, no. 1, pp. 21–24, Jan.–June 2012, doi:10.1109/L-CA.2011.25