A Framework for Computer Performance Evaluation Using Benchmark Sets
December 2000 (vol. 49, no. 12)
pp. 1325-1338

Abstract—Benchmarking is a widely used approach to measuring computer performance. In current practice, benchmarks yield only running times to describe the performance of a tested system, and glancing through these execution times reveals little or nothing about system strengths and weaknesses. A novel benchmarking methodology is proposed to identify key performance parameters; it is based on measuring performance vectors. A performance vector is a vector of ratings that represents the delivered performance of a system's primitive operations. To measure performance vectors, a geometric model is proposed that defines system behavior using the concepts of support points, context lattice, and operating points. In addition to the performance vector, other metrics derivable from the geometric model include the variation in system performance and the compliance of benchmarks. Using this methodology, the performance vectors of the Sun SuperSPARC (a desktop workstation) and the Cray C90 (a vector supercomputer) are evaluated using the SPEC and Perfect Club benchmarks, respectively. The proposed methodology respects several practical constraints and issues in benchmarking: the instrumentation required is minimal; the benchmarks used are realistic (not synthetic), so they reflect delivered (not peak) performance; and operations in the performance vector are not measured individually, since there may be significant interplay in their executions.
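
The abstract does not spell out how a performance vector is computed from whole-benchmark measurements; that is the role of the paper's geometric model (support points, context lattice, operating points) developed in the body of the article. As a minimal sketch of the underlying idea only, the Python example below assumes that each benchmark's running time can be modeled as the dot product of its dynamic primitive-operation counts with the machine's unknown per-operation times, and recovers a vector of ratings with an ordinary least-squares fit. All operation names, counts, and timings are invented for illustration; the least-squares fit stands in for, and is not, the paper's geometric model.

# Minimal sketch (not the authors' method): estimate a performance vector,
# i.e., ratings of primitive operations, from whole-benchmark running times,
# assuming time(benchmark) ~= counts(benchmark) . per_op_times.
# All names and numbers below are hypothetical.
import numpy as np

op_names = ["int_alu", "fp_op", "load", "store", "branch"]

# Hypothetical dynamic operation counts, one row per benchmark.
counts = np.array([
    [4.1e9, 0.2e9, 1.8e9, 0.9e9, 1.1e9],
    [1.3e9, 2.7e9, 2.2e9, 1.0e9, 0.4e9],
    [2.8e9, 1.1e9, 3.0e9, 1.5e9, 0.8e9],
    [0.9e9, 3.4e9, 1.2e9, 0.6e9, 0.3e9],
    [3.5e9, 0.8e9, 2.5e9, 1.2e9, 0.9e9],
    [1.7e9, 2.0e9, 1.9e9, 0.8e9, 0.5e9],
])

# Hypothetical "true" per-operation times (seconds), used here only to
# synthesize measured running times; a real study would measure the times.
true_per_op = np.array([2.0e-9, 8.0e-9, 5.0e-9, 6.0e-9, 3.0e-9])
runtimes = counts @ true_per_op            # one running time per benchmark

# Fit per-operation times from the benchmark set as a whole (operations are
# never timed in isolation), then report ratings in operations per second.
fitted, *_ = np.linalg.lstsq(counts, runtimes, rcond=None)
performance_vector = 1.0 / fitted

for name, rating in zip(op_names, performance_vector):
    print(f"{name:8s}: {rating:.3e} ops/s")

The sketch only illustrates why whole-benchmark times, combined with operation mixes, can constrain per-operation ratings without timing each operation in isolation; the paper's variation and compliance metrics are likewise derived from its geometric model rather than from a fit of this kind.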

[1] A.K. Agrawala, R.M. Bryant, and J.M. Mohr, “An Approach to the Workload Characterization Problem,” Computer, vol. 9, pp. 18-32, 1976.
[2] D. Bailey, E. Barszcz, L. Dagum, and H.D. Simon, “The NAS Parallel Benchmarks Results 10-94,” Technical Report NAS-94-001, NAS Systems Division, NASA Ames Research Center, Oct. 1994.
[3] D. Bailey, J. Barton, T. Lasinski, and H. Simon, “The NAS Parallel Benchmarks,” Technical Report RNR-91-002 Revision 2, NAS Systems Division, NASA Ames Research Center, Aug. 1991.
[4] T. Ball and J.L. Larus, “Optimally Profiling and Tracing Programs,” Technical Report 1031 (Revision 1), Computer Science Dept., Univ. of Wisconsin, Madison, Sept. 1991.
[5] Y. Bard, “Performance Criteria and Measurement for a Time-Sharing System,” IBM Systems J., vol. 10, no. 3, pp. 193-216, 1971.
[6] Y. Bard and K.V. Suryanarayana, “On the Structure of CP-67 Overhead,” Statistical Computer Performance Evaluation, W. Freiberger, ed., pp. 329-346, New York: Academic Press, 1972.
[7] M. Berry, D. Chen, P. Koss, D. Kuck, S. Lo, Y. Pang, L. Pointer, R. Roloff, A. Sameh, E. Clementi, S. Chin, D. Schneider, G. Fox, P. Messina, D. Walker, C. Hsiung, J. Schwarzmeier, K. Lue, S. Orszag, F. Seidl, O. Johnson, R. Goodrum, and J. Martin, “The Perfect Club Benchmarks: Effective Performance Evaluation of Supercomputers,” Int'l J. Supercomputing Applications, vol. 3, no. 3, pp. 5-40, 1989.
[8] M. Berry et al., “The Perfect Club Benchmarks: Effective Performance Evaluation of Supercomputers,” Int'l J. Supercomputing Applications, vol. 3, no. 3, pp. 5-40, 1989.
[9] M. Calzarossa and G. Serazzi, “Workload Characterization: A Survey,” Proc. IEEE, vol. 81, no. 8, pp. 1136-1150, Aug. 1993.
[10] R.F. Cmelik and D. Keppel, “Shade: A Fast Instruction-Set Simulator for Execution Profiling,” Proc. 1994 ACM SIGMETRICS Conf. Measurement and Modeling of Computer Systems, pp. 128-137, May 1994.
[11] CSRD Staff, “Perfect Report 2: Addendum 1,” Technical Report CSRD Report 1052, Center for Supercomputing Research and Development, Univ. of Illinois at Urbana-Champaign, Feb. 1991.
[12] CSRD Staff, “Perfect Report 2: Addendum 2,” Technical Report CSRD Report 1168, Center for Supercomputing Research and Development, Univ. of Illinois at Urbana-Champaign, Nov. 1991.
[13] U. Detert and G. Hofemann, “Cray X-MP and Y-MP Memory Performance,” Parallel Computing, vol. 17, nos. 4-5, pp. 579-590, July 1991.
[14] K.M. Dixit, “The SPEC Benchmarks,” Parallel Computing, vol. 17, nos. 10-11, pp. 1195-1210, Dec. 1991.
[15] J.J. Dongarra, “Performance of Various Computers Using Standard Linear Equations Software,” Technical Report CS-89-85, Computer Science Dept., Univ. of Tennessee, Knoxville, 1989.
[16] A.V. Fiacco and G.P. McCormick, Nonlinear Programming: Sequential Unconstrained Minimization Techniques. New York: John Wiley and Sons, 1968.
[17] L. Geppert, “Not Your Father's CPU,” IEEE Spectrum, vol. 30, no. 12, pp. 20-23, Dec. 1993.
[18] P. Heidelberger and S. Lavenberg, “Computer Performance Evaluation Methodology,” IEEE Trans. Computers, vol. 33, no. 12, pp. 1195-1220, Dec. 1984.
[19] J. Hennessy and D. Patterson, Computer Architecture: A Quantitative Approach. Morgan Kaufmann, 1995.
[20] P.Y.-T. Hsu, Introduction to SHADOW, Revision A. Mountain View, Calif.: Sun Microsystems, Inc., July 1989.
[21] A. Inoue and K. Takeda, “Performance Evaluation for Various Configurations of Superscalar Processors,” Computer Architecture News, vol. 21, no. 1, pp. 4-11, Mar. 1993.
[22] R.A. Kamin III, G.B. Adams III, and P.K. Dubey, “Dynamic Trace Analysis for Analytical Modeling of Superscalar Performance,” Performance Evaluation, vol. 19, nos. 2-3, pp. 259-276, Mar. 1994.
[23] U. Krishnaswamy, “Computer Evaluations Using Performance Vectors,” technical report, Dept. of Information and Computer Science, Univ. of California, Irvine, Dec. 1995.
[24] U. Krishnaswamy and I.D. Scherson, “Micro-Architecture Evaluation Using Performance Vectors,” Proc. ACM SIGMETRICS Conf. Measurement and Modeling of Computer Systems, pp. 148-159, May 1996.
[25] T.T. Kwan, B.K. Totty, and D.A. Reed, “Communication and Computation Performance of the CM-5,” Proc. Supercomputing '93, pp. 192-201, Nov. 1993.
[26] C.L. Lawson and R.J. Hanson, Solving Least Squares Problems. Englewood Cliffs, N.J.: Prentice Hall, 1974.
[27] R.L. Lee, A.Y. Kwok, and F.A. Briggs, “The Floating Point Performance of a Superscalar SPARC Processor,” SIGPLAN Notices, vol. 26, no. 4, pp. 28-37, Apr. 1991.
[28] T. Manley and H. Grossman, “Window Overflow Reduction for SPARC Processors,” Proc. 31st Ann. Southeast Conf., pp. 56-64, 1994.
[29] L. McMahan and R. Lee, “Pathlengths of SPEC Benchmarks for PA-RISC, MIPS, and SPARC,” Digest of Papers COMPCON Spring '93, pp. 481-490, Feb. 1993.
[30] MIPS Computer Systems, Inc., MIPS Languages and Programmer's Manual, 1986.
[31] A. Nanda and L.M. Ni, “Benchmark Workload Generation and Performance Characterization of Multiprocessors,” Proc. Supercomputing '92, pp. 20-29, Nov. 1992.
[32] D.B. Noonburg and J.P. Shen, “Theoretical Modeling of Superscalar Processor Performance,” Proc. 27th Ann. Int'l Symp. Microarchitecture MICRO 27, pp. 52-62, Dec. 1994.
[33] R.W. Numrich, P.L. Springer, and J.C. Peterson, “Measurement of Communication Rates on the Cray T3D Interprocessor Network,” High-Performance Computing and Networking, W. Gentzsch and U. Harms, eds., pp. 150-157, Berlin: Springer-Verlag, 1994.
[34] W. Oed, “Y-MP C90: System Features and Early Benchmark Results,” Parallel Computing, vol. 18, no. 8, pp. 947-954, Aug. 1992.
[35] W. Oed, personal communication, Cray Research GmbH, München, Germany, 1995.
[36] L. Pointer, “Perfect Report 2,” Technical Report CSRD Report 964, Center for Supercomputing Research and Development, Univ. of Illinois at Urbana-Champaign, Mar. 1990.
[37] D.A. Reed et al., “An Overview of the Pablo Performance Analysis Environment,” Proc. Scalable Parallel Libraries Conf., pp. 104-113, IEEE Computer Society Press, Los Alamitos, Calif., Oct. 1994.
[38] K.A. Robbins and S. Robbins, “Dynamic Behavior of Memory Reference Streams for the Perfect Club Benchmarks,” Proc. Int'l Conf. Parallel Processing, pp. I-48-52, 1992.
[39] R.H. Saavedra-Barrera and A.J. Smith, “Analysis of Benchmark Characteristics and Benchmark Performance Prediction,” Technical Report USC-CS-92-524, Univ. of Southern California, Los Angeles, Sept. 1992.
[40] R.H. Saavedra-Barrera, A.J. Smith, and E. Miya, “Machine Characterization Based on an Abstract High-Level Language Machine,” IEEE Trans. Computers, vol. 38, no. 12, pp. 1659-1679, Dec. 1989.
[41] M.J. Serrano, W. Yamamoto, R.C. Wood, and M. Nemirovsky, “A Model for Performance Estimation in a Multistreamed Superscalar Processor,” Computer Performance Evaluation: Modelling Techniques and Tools, G. Haring and G. Kotsis, eds., pp. 213-230, Berlin: Springer-Verlag, 1994.
[42] P. Sinvhal-Sharma, “Perfect Benchmarks™ Documentation Suite 1,” Center for Supercomputing Research and Development, Univ. of Illinois, Urbana-Champaign, Sept. 1991.
[43] SPEC, SPEC Newsletter, June 1994.
[44] G.W. Stewart, Introduction to Matrix Computations. New York: Academic Press, 1973.
[45] G. Strang, Linear Algebra and Its Applications, third ed. San Diego, Calif.: Harcourt Brace Jovanovich, 1988.
[46] Sun Microsystems, Inc., The SuperSPARC User's Guide, Part No. 801-4272-01, n.d.
[47] Sun Microsystems, Inc., The SPARC Architecture Manual, Version 8, Part No. 800-1399-09, Aug. 1989.
[48] D. Tabak, Advanced Microprocessors, second ed. New York: McGraw-Hill, 1995.
[49] S. Vajapeyam, G.S. Sohi, and W.-C. Hsu, “An Empirical Study of the Cray Y-MP Using the Perfect Club Benchmarks,” Proc. 18th Int'l Symp. Computer Architecture, pp. 170-179, 1991.
[50] S. Wallace and N. Bagherzadeh, “Performance Issues of a Superscalar Microprocessor,” Microprocessors and Microsystems, vol. 19, no. 4, pp. 187-199, May 1995.

Index Terms:
Computer performance evaluation, performance modeling, benchmark sets, performance vectors, superscalar processors, vector computers.
Citation:
Umesh Krishnaswamy, Isaac D. Scherson, "A Framework for Computer Performance Evaluation Using Benchmark Sets," IEEE Transactions on Computers, vol. 49, no. 12, pp. 1325-1338, Dec. 2000, doi:10.1109/12.895853