Issue No. 5 - May 1995 (vol. 44), pp. 609-623
ABSTRACT
Memory latency and bandwidth are progressing at a much slower pace than processor performance. In this paper, we describe and evaluate the performance of three variations of a hardware function unit whose goal is to assist a data cache in prefetching data accesses so that memory latency is hidden as often as possible. The basic idea of the prefetching scheme is to keep track of data access patterns in a Reference Prediction Table (RPT) organized as an instruction cache. The three designs differ mainly in the timing of the prefetches. In the simplest scheme (basic), prefetches can be generated one iteration ahead of actual use. The lookahead variation takes advantage of a lookahead program counter that ideally stays one memory latency ahead of the real program counter and that serves as the control mechanism for generating prefetches. Finally, the correlated scheme uses a more sophisticated design to detect patterns across loop levels.

These designs are evaluated by simulating the ten SPEC benchmarks on a cycle-by-cycle basis. The results show that 1) all three hardware prefetching schemes yield significant reductions in the data access penalty compared with regular caches, 2) the benefits are greater when the hardware assist augments small on-chip caches, and 3) the lookahead scheme offers the best cost-performance trade-off.
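The basic scheme the abstract outlines can be made concrete in software. The following C sketch models a direct-mapped Reference Prediction Table indexed by the program counter of the load/store, with a four-state stride predictor in the spirit of this design; the table size, indexing, and exact state-transition rules here are illustrative assumptions rather than details quoted from this page.

/*
 * Minimal software sketch of the "basic" RPT stride-prediction scheme.
 * A direct-mapped table indexed by the load instruction's PC tracks the
 * last address and stride; a four-state machine (INITIAL / TRANSIENT /
 * STEADY / NO_PRED) decides when a prefetch is issued. Sizes and
 * transition rules are assumptions for illustration.
 */
#include <stdio.h>
#include <stdint.h>

#define RPT_ENTRIES 64  /* assumed table size; the paper evaluates real hardware budgets */

typedef enum { INITIAL, TRANSIENT, STEADY, NO_PRED } rpt_state;

typedef struct {
    uint64_t  tag;        /* PC of the load/store that owns this entry */
    uint64_t  prev_addr;  /* address of its previous access */
    int64_t   stride;     /* last observed address delta */
    rpt_state state;
    int       valid;
} rpt_entry;

static rpt_entry rpt[RPT_ENTRIES];

/* Observe one memory access; return a prefetch address, or 0 for none. */
uint64_t rpt_access(uint64_t pc, uint64_t addr)
{
    rpt_entry *e = &rpt[pc % RPT_ENTRIES];

    if (!e->valid || e->tag != pc) {         /* allocate on an RPT miss */
        e->tag = pc; e->prev_addr = addr;
        e->stride = 0; e->state = INITIAL; e->valid = 1;
        return 0;
    }

    int correct = ((int64_t)(addr - e->prev_addr) == e->stride);

    switch (e->state) {                      /* assumed transition rules */
    case INITIAL:   e->state = correct ? STEADY   : TRANSIENT; break;
    case TRANSIENT: e->state = correct ? STEADY   : NO_PRED;   break;
    case STEADY:    e->state = correct ? STEADY   : INITIAL;   break;
    case NO_PRED:   e->state = correct ? TRANSIENT : NO_PRED;  break;
    }
    if (!correct && e->state != INITIAL)     /* re-learn the stride */
        e->stride = (int64_t)(addr - e->prev_addr);
    e->prev_addr = addr;

    /* Basic scheme: prefetch one iteration ahead once the pattern holds. */
    return (e->state == STEADY) ? addr + (uint64_t)e->stride : 0;
}

int main(void)
{
    /* A strided loop: the same load PC touching a[i], 8 bytes apart. */
    for (int i = 0; i < 6; i++) {
        uint64_t addr = 0x1000 + 8u * (unsigned)i;
        uint64_t pf = rpt_access(0x400100, addr);
        if (pf)
            printf("access 0x%llx -> prefetch 0x%llx\n",
                   (unsigned long long)addr, (unsigned long long)pf);
        else
            printf("access 0x%llx -> no prefetch\n",
                   (unsigned long long)addr);
    }
    return 0;
}

In this run, the entry reaches the STEADY state after two consecutive accesses with the same stride, after which each access triggers a prefetch of the next iteration's address. The lookahead and correlated variants change when and which entries are consulted, not this core per-entry state machine.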
INDEX TERMS
Prefetching, hardware function unit, reference prediction, branch prediction, data cache, cycle-by-cycle simulations.
CITATION
Jean-Loup Baer and Tien-Fu Chen, "Effective Hardware-Based Data Prefetching for High-Performance Processors," IEEE Transactions on Computers, vol. 44, no. 5, pp. 609-623, May 1995, doi:10.1109/12.381947.