Speed versus Accuracy Trade-Offs in Microarchitectural Simulations
November 2007 (vol. 56, no. 11), pp. 1549-1563

Abstract—Because simulating the reference input set to completion takes an extremely long time, computer architects often use reduced-time simulation techniques to shorten the simulation time. However, the accuracy of these techniques, relative to the reference input set and to each other, has not yet been thoroughly evaluated. To rectify this deficiency, this paper uses three methods to characterize reduced-input-set, truncated-execution, and sampling-based simulation techniques, while also examining their speed versus accuracy trade-offs and configuration dependence. Our results show that the three sampling-based techniques, SimPoint, SMARTS, and random sampling, have the best accuracy, the best speed versus accuracy trade-off, and the least configuration dependence. By contrast, the reduced-input-set and truncated-execution simulation techniques generally have poor accuracy, are not significantly faster than the sampling-based techniques, and are severely configuration dependent. The final contribution of this paper is a decision tree that can help architects choose the most appropriate technique for their simulations.
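To make the sampling idea concrete, below is a minimal sketch of the simplest of the three sampling-based techniques, random sampling: simulate a set of randomly chosen instruction intervals in detail and extrapolate whole-program CPI with a statistical confidence interval. Everything in the sketch is illustrative; simulate_interval, the interval length, and the sample count are hypothetical stand-ins, not the paper's experimental setup or any real simulator's API. (SimPoint instead selects representative intervals by phase clustering, and SMARTS uses systematic sampling with functional warming.)

    import math
    import random

    # Placeholder for a detailed, cycle-accurate simulation of one interval of
    # `interval_len` instructions starting at instruction `start`. In a real
    # study this would invoke the detailed simulator; here it returns a
    # synthetic cycle count so the sketch runs standalone.
    def simulate_interval(start: int, interval_len: int) -> int:
        rng = random.Random(start)  # deterministic per-interval "behavior"
        return int(interval_len * (0.8 + 0.8 * rng.random()))

    def sampled_cpi(total_insts: int, interval_len: int = 10_000,
                    n_samples: int = 100, z: float = 1.96,
                    seed: int = 1) -> tuple[float, float]:
        """Estimate whole-program CPI from n_samples randomly chosen
        intervals; returns (mean CPI, 95% confidence half-width)."""
        rng = random.Random(seed)
        n_intervals = total_insts // interval_len
        starts = [rng.randrange(n_intervals) * interval_len
                  for _ in range(n_samples)]
        cpis = [simulate_interval(s, interval_len) / interval_len
                for s in starts]
        mean = sum(cpis) / n_samples
        var = sum((c - mean) ** 2 for c in cpis) / (n_samples - 1)
        return mean, z * math.sqrt(var / n_samples)

    cpi, half = sampled_cpi(total_insts=1_000_000_000)
    print(f"estimated CPI = {cpi:.3f} +/- {half:.3f} (95% confidence)")

The speed advantage comes from detailed-simulating only n_samples * interval_len instructions (one million in this sketch) instead of the full billion; the accuracy question the paper studies is how well such estimates, and the corresponding SimPoint and SMARTS estimates, track full reference-input simulation across machine configurations.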

[1] B. Black and J. Shen, “Calibration of Microprocessor Performance Models,” Computer, vol. 31, no. 5, pp. 59-65, May 1998.
[2] D. Brooks, V. Tiwari, and M. Martonosi, “Wattch: A Framework for Architectural Level Power Analysis and Optimizations,” Proc. 27th Int'l Symp. Computer Architecture (ISCA '00), 2000.
[3] D. Citron, “MisSPECulation: Partial and Misleading Use of SPEC CPU2000 in Computer Architecture Conferences,” 30th Int'l Symp. Computer Architecture (ISCA '03) Panel Discussion, 2003.
[4] T. Conte, M. Hirsch, and K. Menezes, “Reducing State Loss for Effective Trace Sampling of Superscalar Processors,” Proc. Int'l Conf. Computer Design (ICCD), 1996.
[5] T. Conte and P. Bryan, personal communication, 2005.
[6] T. Conte and P. Bryan, “Statistical Techniques for Processor and Cache Simulation,” Performance Evaluation and Benchmarking, L.K. John and L. Eeckhout, eds., chapter 6, CRC Press, 2005.
[7] R. Desikan, D. Burger, and S. Keckler, “Measuring Experimental Error in Microprocessor Simulation,” Proc. 28th Int'l Symp. Computer Architecture (ISCA '01), 2001.
[8] L. Eeckhout, H. Vandierendonck, and K. De Bosschere, “Workload Design: Selecting Representative Program-Input Pairs,” Proc. 11th Int'l Conf. Parallel Architectures and Compilation Techniques (PACT '02), 2002.
[9] J. Gibson, R. Kunz, M. Ofelt, M. Horowitz, and J. Hennessy, “FLASH vs. (Simulated) FLASH: Closing the Simulation Loop,” Proc. Ninth Int'l Conf. Architectural Support for Programming Languages and Operating Systems (ASPLOS '00), 2000.
[10] I. Gómez, L. Piñuel, M. Prieto, and F. Tirado, “Analysis of Simulation-Adapted SPEC 2000 Benchmarks,” Computer Architecture News, vol. 30, no. 4, pp. 4-10, Sept. 2002.
[11] G. Hamerly, E. Perelman, J. Lau, and B. Calder, “SimPoint 3.0: Faster and More Flexible Program Analysis,” J. Instruction Level Parallelism, Sept. 2005.
[12] J. Henning, “SPEC CPU2000: Measuring CPU Performance in the New Millennium,” Computer, vol. 33, no. 7, pp. 28-35, July 2000.
[13] A. KleinOsowski and D. Lilja, “MinneSPEC: A New SPEC Benchmark Workload for Simulation-Based Computer Architecture Research,” IEEE Computer Architecture Letters, vol. 1, June 2002.
[14] D. Lilja, Measuring Computer Performance. Cambridge Univ. Press, 2000.
[15] E. Perelman, G. Hamerly, and B. Calder, “Picking Statistically Valid and Early Simulation Points,” Proc. 12th Int'l Conf. Parallel Architectures and Compilation Techniques (PACT '03), 2003.
[16] R. Plackett and J. Burman, “The Design of Optimum Multifactorial Experiments,” Biometrika, vol. 33, no. 4, pp. 305-325, June 1946.
[17] T. Sherwood, E. Perelman, G. Hamerly, and B. Calder, “Automatically Characterizing Large Scale Program Behavior,” Proc. 10th Int'l Conf. Architectural Support for Programming Languages and Operating Systems (ASPLOS '02), 2002.
[18] R. Wunderlich, T. Wenisch, B. Falsafi, and J. Hoe, “SMARTS: Accelerating Microarchitectural Simulation via Rigorous Statistical Sampling,” Proc. 30th Int'l Symp. Computer Architecture (ISCA '03), 2003.
[19] R. Wunderlich, personal communication, 2004.
[20] J. Yi, D. Lilja, and D. Hawkins, “A Statistically-Rigorous Approach for Improving Simulation Methodology,” Proc. Ninth Int'l Symp. High-Performance Computer Architecture (HPCA '03), 2003.
[21] J. Yi, S. Kodakara, R. Sendag, D. Lilja, and D. Hawkins, “Characterizing and Comparing Prevailing Simulation Techniques,” Proc. 11th Int'l Symp. High-Performance Computer Architecture (HPCA '05), 2005.

Index Terms:
Modeling of computer architecture, Measurement techniques, Modeling techniques
Citation:
Joshua J. Yi, Resit Sendag, David J. Lilja, Douglas M. Hawkins, "Speed versus Accuracy Trade-Offs in Microarchitectural Simulations," IEEE Transactions on Computers, vol. 56, no. 11, pp. 1549-1563, Nov. 2007, doi:10.1109/TC.2007.70744