Issue No. 1, January 2000 (vol. 49), pp. 1-15
ABSTRACT
Most modern microprocessors employ one or two levels of on-chip caches to improve performance. These caches are typically implemented with static RAM cells and often occupy a large portion of the chip area; not surprisingly, they can consume a significant amount of power. In many applications, such as portable devices, energy efficiency is more important than performance. We propose sacrificing some performance in exchange for energy efficiency by filtering cache references through an unusually small first-level cache, which we call the filter cache. A second-level cache, similar in size and structure to a conventional first-level cache, is positioned behind the filter cache and serves to mitigate the performance loss. Extensive experiments indicate that a small filter cache can still achieve a high hit rate and good performance. This approach allows the second-level cache to remain in a low-power mode most of the time, resulting in power savings. The filter cache is particularly attractive in low-power applications, such as the embedded processors used for communication and multimedia. For example, experimental results across a wide range of embedded applications show that a direct-mapped 256-byte filter cache achieves a 58 percent power reduction while reducing performance by 21 percent. This trade-off results in a 51 percent reduction in the energy-delay product when compared to a conventional design.
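To make the structure described above concrete, the following is a minimal sketch of a direct-mapped 256-byte filter cache placed in front of a larger cache. All parameters here (block size, second-level cache size, the address stream) are illustrative assumptions, not the paper's experimental configuration; the point is simply that a hit in the tiny filter cache avoids probing the larger cache, which can then stay in a low-power mode.

```python
BLOCK_SIZE = 32  # bytes per cache line (assumed for illustration)

class DirectMappedCache:
    """A simple direct-mapped cache model tracking tags and hit counts."""

    def __init__(self, size_bytes):
        self.num_lines = size_bytes // BLOCK_SIZE
        self.tags = [None] * self.num_lines
        self.hits = 0
        self.accesses = 0

    def access(self, addr):
        """Return True on a hit; install the line on a miss."""
        self.accesses += 1
        line = addr // BLOCK_SIZE
        index = line % self.num_lines
        tag = line // self.num_lines
        if self.tags[index] == tag:
            self.hits += 1
            return True
        self.tags[index] = tag
        return False

class FilterCacheHierarchy:
    """A tiny filter cache backed by a conventional-size cache."""

    def __init__(self):
        self.filter = DirectMappedCache(256)     # 256-byte filter cache
        self.l1 = DirectMappedCache(8 * 1024)    # larger cache behind it (assumed size)

    def access(self, addr):
        # Only probe the larger cache on a filter-cache miss, so it can
        # remain in a low-power mode for the common (hit) case.
        if not self.filter.access(addr):
            self.l1.access(addr)

# Usage: a small loop-heavy address stream, typical of embedded media
# kernels, achieves a high hit rate even in a 256-byte filter cache.
h = FilterCacheHierarchy()
for _ in range(100):
    for addr in range(0, 128, 4):  # loop working set fits in 256 bytes
        h.access(addr)
hit_rate = h.filter.hits / h.filter.accesses
```

Under this toy workload, only the first pass over the loop misses; every subsequent pass hits in the filter cache, so the larger cache is almost never activated. Real hit rates depend on the application's locality, which is why the paper reports results across a range of embedded benchmarks.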
INDEX TERMS
Filter cache, low power, embedded processor, energy-delay, media processor.
CITATION
Johnson Kin, Munish Gupta, William H. Mangione-Smith, "Filtering Memory References to Increase Energy Efficiency", IEEE Transactions on Computers, vol. 49, no. 1, pp. 1-15, January 2000, doi:10.1109/12.822560