Issue No. 7 - July 2001 (vol. 50)
pp. 709-723
ABSTRACT
Providing adequate data bandwidth is extremely important for a future wide-issue processor to achieve its full performance potential. Adding a large number of ports to a data cache, however, becomes increasingly inefficient and can add significantly to the hardware complexity. This paper takes an alternative, or complementary, approach to providing more data bandwidth, called data decoupling. In particular, it studies an interesting, yet less explored, behavior of memory access instructions, called access region locality, which concerns each static memory instruction and the range of locations it accesses at runtime. Our experimental study using a set of SPEC95 benchmark programs shows that most memory access instructions reference a single region at runtime. It also shows that the access region of a memory instruction can be predicted accurately at runtime by scrutinizing the instruction's addressing mode and its past access history. We describe and evaluate a wide-issue superscalar processor with two distinct sets of memory pipelines and caches, driven by the access region predictor. Experimental results indicate that the proposed mechanism is very effective in providing high memory bandwidth to the processor, resulting in comparable or better performance than a conventional memory design with a heavily multiported data cache, which entails much higher hardware complexity.
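The abstract itself gives no implementation details. As a rough illustration of the idea it describes, the following sketch (not taken from the paper) shows one way an access region predictor might combine an addressing-mode hint with a small per-instruction history table; the table size, PC-based indexing, and 2-bit hysteresis counters are illustrative assumptions, not the authors' design.

/*
 * Illustrative sketch: predict whether a static memory instruction will
 * access the stack region or the non-stack (heap/global) region, using
 * its addressing mode and its past access history.  All parameters here
 * are assumptions made for the example.
 */
#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

#define PRED_ENTRIES 1024               /* assumed predictor table size */

typedef enum { REGION_STACK, REGION_OTHER } region_t;

typedef struct {
    uint8_t counter;                    /* 2-bit counter: 0-1 -> other, 2-3 -> stack */
} pred_entry_t;

static pred_entry_t table[PRED_ENTRIES];

static unsigned index_of(uint64_t pc)
{
    return (unsigned)((pc >> 2) & (PRED_ENTRIES - 1));
}

/* Predict the access region of the instruction at 'pc'.  If the addressing
 * mode uses the stack pointer as the base register, predict the stack region
 * directly; otherwise fall back to the per-instruction history counter. */
region_t predict_region(uint64_t pc, bool base_is_stack_pointer)
{
    if (base_is_stack_pointer)
        return REGION_STACK;
    return (table[index_of(pc)].counter >= 2) ? REGION_STACK : REGION_OTHER;
}

/* Once the effective address is resolved, train the predictor with the
 * region the instruction actually referenced. */
void update_region(uint64_t pc, region_t actual)
{
    pred_entry_t *e = &table[index_of(pc)];
    if (actual == REGION_STACK && e->counter < 3)
        e->counter++;
    else if (actual == REGION_OTHER && e->counter > 0)
        e->counter--;
}

int main(void)
{
    /* Toy usage: a load whose base register is not the stack pointer but
     * that repeatedly touches the stack is learned after a few updates. */
    uint64_t pc = 0x400120;
    for (int i = 0; i < 3; i++)
        update_region(pc, REGION_STACK);
    printf("predicted: %s\n",
           predict_region(pc, false) == REGION_STACK ? "stack" : "other");
    return 0;
}

In a design like the one the abstract outlines, such a prediction would be consulted early in the pipeline to steer each memory instruction to one of the two memory pipelines and caches.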
INDEX TERMS
Data bandwidth, data locality, instruction level parallelism, runtime stack, data stream partitioning, multiported data cache.
CITATION
Sangyeun Cho, Pen-Chung Yew, Gyungho Lee, "A High-Bandwidth Memory Pipeline for Wide Issue Processors", IEEE Transactions on Computers, vol. 50, no. 7, pp. 709-723, July 2001, doi:10.1109/12.936237