Issue No. 10, October 2004 (vol. 53), pp. 1274-1290
ABSTRACT
The demand for higher computing power and, thus, more on-chip computing resources is ever increasing. The size of on-chip cache memory has also grown steadily to keep pace with developments in implementation technology. However, some applications do not utilize the full cache capacity and instead require more computing resources. To use on-chip silicon real estate efficiently, we exploit the possibility of using part of the cache memory for computation, striking a balance between memory and computing resources across applications. In earlier work, we developed the Adaptive Balanced Computing (ABC) architecture, in which a module of the L1 data cache serves as a coprocessor controlled by the main processor. Part of the L1 data cache is designed as a Reconfigurable Functional Cache (RFC) that can be configured to perform a selected core function of a media application whenever such computing capability is required. The ABC architecture provides speedups ranging from 1.04x to 5.0x for various media applications. In this paper, we show that the reduced number of cache accesses and the lower utilization of other on-chip resources, both due to the significant reduction in application execution time, result in power savings. To this end, the paper first develops a model to compute the power consumed by the RFC while accelerating multimedia applications. The results show a reduction in power consumption of up to 60 percent for MPEG decoding and in the range of 10 to 20 percent for various other multimedia applications. Beyond the discussions in earlier work on the ABC architecture, this paper also presents a detailed circuit-level implementation of the core functions in the RFC modules. Further, we study the impact of converting a conventional cache into an RFC on both access time and energy consumption.
The analysis is performed on a wide spectrum of cache organizations, with sizes varying from 8KB to 256KB and varying set associativity.
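The abstract's central power argument is that eliminating cache accesses (by offloading a kernel to the RFC) proportionally reduces dynamic energy. The following toy sketch illustrates that relationship only; it is not the paper's model, and the function names and the per-access energy value are illustrative assumptions.

```python
# Hedged sketch (not the paper's model): dynamic energy scales with the
# number of cache accesses, so cutting accesses cuts dynamic energy.
# The per-access energy value below is an arbitrary illustrative assumption.

def dynamic_energy_nj(n_accesses: int, energy_per_access_nj: float) -> float:
    """Total dynamic energy (nJ) for a given number of cache accesses."""
    return n_accesses * energy_per_access_nj

def savings_fraction(baseline_accesses: int, rfc_accesses: int,
                     energy_per_access_nj: float = 0.5) -> float:
    """Fractional dynamic-energy reduction when RFC execution
    replaces some conventional cache accesses."""
    base = dynamic_energy_nj(baseline_accesses, energy_per_access_nj)
    rfc = dynamic_energy_nj(rfc_accesses, energy_per_access_nj)
    return (base - rfc) / base

# Example: if RFC-accelerated execution needs only 40% of the baseline
# accesses, this toy model predicts a 60% dynamic-energy reduction.
print(savings_fraction(1_000_000, 400_000))
```

In practice a full model (as the paper develops) must also account for the energy of the RFC's reconfigured logic and of other on-chip resources, not accesses alone.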
INDEX TERMS
On-chip data cache, adaptive computing, multimedia processing, cache access time, cache energy dissipation.
CITATION
Rama Sangireddy, Huesung Kim, Arun K. Somani, "Low-Power High-Performance Reconfigurable Computing Cache Architectures", IEEE Transactions on Computers, vol.53, no. 10, pp. 1274-1290, October 2004, doi:10.1109/TC.2004.80