B-Fetch: Branch Prediction Directed Prefetching for In-Order Processors
July-Dec. 2012 (vol. 11 no. 2)
pp. 41-44
Reena Panda, Texas A&M University, College Station
Paul V. Gratz, University of Texas at Austin, Austin
Daniel A. Jimenez, University of Texas at San Antonio, San Antonio and Rutgers University, Piscataway
Computer architecture is beset by two opposing trends. Technology scaling and deep pipelining have driven up memory access latencies, while power and energy considerations have revived interest in traditional in-order processors. Unlike their superscalar counterparts, in-order processors cannot continue execution around data cache misses, and they therefore suffer a greater performance penalty from today's high memory access latencies. Memory prefetching is an established technique for reducing the incidence of cache misses and improving performance. In this paper, we introduce B-Fetch, a new data prefetching technique that combines deep path speculation, directed by the branch predictor, with effective address speculation to efficiently improve performance in in-order processors. Our results show that B-Fetch improves performance by 38.8% on the SPEC CPU2006 benchmarks, beating a current state-of-the-art prefetcher design at roughly one third the hardware overhead.
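The abstract describes prefetching by speculating down the branch-predicted path and guessing the effective addresses of loads along it. The sketch below is a purely illustrative software model of that general idea, not the authors' B-Fetch hardware: the `ToyBranchPredictor`, the CFG dictionary layout, and the per-load `(base_reg, offset, delta)` tuples are all invented here for clarity.

```python
class ToyBranchPredictor:
    """2-bit saturating counter per branch PC (a common baseline scheme,
    assumed here for illustration)."""

    def __init__(self):
        self.counters = {}

    def predict(self, pc):
        # Counter values 2-3 mean "predict taken"; default is weakly taken.
        return self.counters.get(pc, 2) >= 2

    def update(self, pc, taken):
        c = self.counters.get(pc, 2)
        self.counters[pc] = min(3, c + 1) if taken else max(0, c - 1)


def speculative_prefetch_addresses(cfg, predictor, start_block, reg_file, depth):
    """Walk up to `depth` basic blocks along the branch-predicted path,
    speculating each load's effective address from an approximate copy
    of the register file (the essence of lookahead path speculation
    combined with effective-address speculation)."""
    addrs, block = [], start_block
    regs = dict(reg_file)  # speculative copy; never written back
    for _ in range(depth):
        info = cfg.get(block)
        if info is None:
            break
        for base_reg, offset, delta in info["loads"]:
            if base_reg in regs:
                addrs.append(regs[base_reg] + offset)  # base + offset
                regs[base_reg] += delta  # speculated register update
        # Follow the predicted direction of the block-ending branch.
        block = info["taken"] if predictor.predict(block) else info["fall"]
    return addrs


# Usage: a two-block loop whose loads stride through memory via r1.
cfg = {
    0: {"loads": [("r1", 0, 8)], "taken": 1, "fall": 2},
    1: {"loads": [("r1", 4, 8)], "taken": 0, "fall": 2},
}
addrs = speculative_prefetch_addresses(
    cfg, ToyBranchPredictor(), start_block=0, reg_file={"r1": 1000}, depth=3
)
print(addrs)  # speculated addresses to prefetch
```

The walk issues prefetch candidates several basic blocks ahead of the actual execution front, which is what lets an in-order core overlap miss latency it could not otherwise hide.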
Index Terms:
Prefetching, Registers, Process control, Benchmark testing, Computer architecture, Cache memory, Value prediction, Pipelines, Hardware, In-order processors, Data cache prefetching, Memory systems, Branch prediction
Citation:
Reena Panda, Paul V. Gratz, Daniel A. Jimenez, "B-Fetch: Branch Prediction Directed Prefetching for In-Order Processors," IEEE Computer Architecture Letters, vol. 11, no. 2, pp. 41-44, July-Dec. 2012, doi:10.1109/L-CA.2011.33