Issue No. 02 - July-Dec. (2012 vol. 11)
ISSN: 1556-6056
pp: 41-44
Paul V. Gratz , University of Texas at Austin, Austin
Daniel A. Jimenez , University of Texas at San Antonio, San Antonio / Rutgers University, Piscataway
Reena Panda , Texas A&M University, College Station
Computer architecture is beset by two opposing trends. Technology scaling and deep pipelining have driven up memory access latencies, while power and energy considerations have revived interest in traditional in-order processors. Unlike their superscalar counterparts, in-order processors do not allow execution to continue around data cache misses, and they therefore suffer a greater performance penalty from today's high memory access latencies. Memory prefetching is an established technique for reducing the incidence of cache misses and improving performance. In this paper, we introduce B-Fetch, a new data prefetching technique that combines branch-prediction-directed, deep-path lookahead speculation with effective address speculation to efficiently improve performance in in-order processors. Our results show that B-Fetch improves performance by 38.8% on the SPEC CPU2006 benchmarks, beating a current state-of-the-art prefetcher design at roughly one-third the hardware overhead.
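The core idea described in the abstract — using the branch predictor to speculate several branches down the expected control-flow path, speculating the effective addresses of loads along that path, and issuing prefetches for them — can be illustrated with a minimal sketch. The toy instruction encoding, the two-bit predictor, and all names below are illustrative assumptions, not the paper's actual microarchitecture.

```python
class TwoBitPredictor:
    """Classic 2-bit saturating-counter branch predictor (assumed here;
    B-Fetch leverages whatever predictor the core already has)."""

    def __init__(self):
        self.table = {}  # branch PC -> saturating counter in [0, 3]

    def predict(self, pc):
        # Counters 2 and 3 predict taken; default is weakly taken.
        return self.table.get(pc, 2) >= 2

    def update(self, pc, taken):
        c = self.table.get(pc, 2)
        self.table[pc] = min(3, c + 1) if taken else max(0, c - 1)


def lookahead_prefetch(program, start_pc, regs, predictor, depth=3):
    """Walk the predicted path up to `depth` branches ahead of the
    in-order front end. For each load encountered, speculate its
    effective address as base register + immediate offset and record
    it as a prefetch candidate.

    `program` maps PC -> instruction tuple:
        ("load", base_reg, offset)  or  ("branch", target)  or  ("alu",)
    """
    prefetches = []
    pc, branches_seen = start_pc, 0
    while pc in program and branches_seen < depth:
        op = program[pc]
        if op[0] == "load":
            # Effective address speculation: base value + offset.
            prefetches.append(regs.get(op[1], 0) + op[2])
            pc += 1
        elif op[0] == "branch":
            branches_seen += 1
            # Follow the predicted direction down the speculative path.
            pc = op[1] if predictor.predict(pc) else pc + 1
        else:
            pc += 1
    return prefetches
```

For example, with a taken-predicted branch at PC 1 targeting PC 4, the walker skips the fall-through load at PC 2 and instead prefetches along the predicted path; retraining the predictor not-taken would redirect the walk to the fall-through load.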
Prefetching, Registers, Process control, Benchmark testing, Computer architecture, Cache memory, Value prediction, Pipelines, Hardware, In-order processors, Data cache prefetching, Memory systems, Branch prediction
Paul V. Gratz, Daniel A. Jimenez, Reena Panda, "B-Fetch: Branch Prediction Directed Prefetching for In-Order Processors", IEEE Computer Architecture Letters, vol. 11, no. 2, pp. 41-44, July-Dec. 2012, doi:10.1109/L-CA.2011.33