Issue No.05 - May (1998 vol.47)
pp: 509-526
ABSTRACT
Prefetching into CPU caches has long been known to be effective in reducing the cache miss ratio, but known implementations of prefetching have been unsuccessful in improving CPU performance. The reasons are that prefetches interfere with normal cache operations by making the cache address and data ports busy, the memory bus busy, and the memory banks busy, and by not necessarily being complete by the time the prefetched data is actually referenced. In this paper, we present extensive quantitative results of a detailed cycle-by-cycle trace-driven simulation of a uniprocessor memory system in which we vary most of the relevant parameters in order to determine when and if hardware prefetching is useful. We find that, for prefetching to actually improve performance, the address array needs to be double ported, and the data array needs to be either double ported or fully buffered. It is also very helpful for the bus to be very wide (e.g., 16 bytes), for bus transactions to be split, and for main memory to be interleaved. Under the best circumstances, i.e., with a significant investment in extra hardware, prefetching can significantly improve performance. For implementations without adequate hardware, prefetching often decreases performance.
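To illustrate the miss-ratio effect the abstract describes, the following is a toy sketch of one-block-lookahead hardware prefetching in a direct-mapped cache. It is not the paper's cycle-accurate simulator: the function name, cache parameters, and trace are illustrative assumptions, and it counts only demand misses, deliberately ignoring the port, bus, and bank contention that the paper shows can erase (or reverse) the benefit.

```python
# Toy sketch (illustrative, not the paper's simulator): a direct-mapped
# cache with optional one-block-lookahead (OBL) prefetching. Counts
# demand misses only; timing effects (busy ports, bus, banks) that the
# paper measures are deliberately omitted.

def simulate(addresses, num_lines=64, line_size=16, prefetch=False):
    tags = [None] * num_lines  # block tag stored in each cache line
    misses = 0

    def touch(block, demand):
        nonlocal misses
        idx = block % num_lines  # direct-mapped index
        if tags[idx] != block:
            if demand:           # prefetch fills are not demand misses
                misses += 1
            tags[idx] = block    # fill the line (demand or prefetch)

    for addr in addresses:
        block = addr // line_size
        touch(block, demand=True)
        if prefetch:
            touch(block + 1, demand=False)  # prefetch next sequential block
    return misses

# A purely sequential scan is the best case for OBL prefetching:
trace = list(range(0, 2048, 4))  # 512 word accesses over 128 blocks
print(simulate(trace, prefetch=False))  # one demand miss per block: 128
print(simulate(trace, prefetch=True))   # every block but the first is prefetched: 1
```

On a sequential trace the miss count collapses, which is the miss-ratio win the first sentence of the abstract refers to; the paper's contribution is showing that, without double-ported or buffered cache arrays and a wide split-transaction bus, servicing those prefetches can still slow the CPU down.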
INDEX TERMS
Cache memory, prefetching, timing model, cache prefetching, CPU architecture, memory system design, CPU cache memory.
CITATION
John Tse, Alan Jay Smith, "CPU Cache Prefetching: Timing Evaluation of Hardware Implementations", IEEE Transactions on Computers, vol. 47, no. 5, pp. 509-526, May 1998, doi:10.1109/12.677225