IEEE Transactions on Computers, vol. 49, no. 12, December 2000
DOI Bookmark: http://doi.ieeecomputersociety.org/10.1109/12.895853
<p><b>Abstract</b>—Benchmarking is a widely used approach to measuring computer performance. As currently used, benchmarks provide only running times to describe the performance of a tested system, and glancing through these execution times reveals little or nothing about a system's strengths and weaknesses. A novel benchmarking methodology is proposed to identify key performance parameters; it is based on measuring performance vectors. A performance vector is a vector of ratings that represents the delivered performance of a system's primitive operations. To measure performance vectors, a geometric model is proposed that defines system behavior using the concepts of support points, a context lattice, and operating points. In addition to the performance vector, other metrics derivable from the geometric model include the variation in system performance and the compliance of benchmarks. Using this methodology, the performance vectors of the Sun SuperSPARC (a desktop workstation) and the Cray C90 (a vector supercomputer) are evaluated using the SPEC benchmarks and the Perfect Club benchmarks, respectively. The proposed methodology respects several practical constraints in benchmarking: the instrumentation required is minimal; the benchmarks used are realistic (not synthetic), so they reflect delivered (not peak) performance; and the operations in the performance vector are not measured individually, since there may be significant interplay in their executions.</p>
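The abstract's central idea, a performance vector recovered from whole-benchmark running times rather than from isolated microbenchmarks, can be illustrated with a minimal sketch. This is not the paper's actual method; it only assumes, for illustration, that each benchmark's running time is a linear combination of its primitive-operation counts and unknown per-operation times, so the rates can be solved for jointly. All numbers and the 2x2 setup are invented:

```python
def solve_2x2(A, t):
    """Solve the 2x2 linear system A @ x = t by Cramer's rule."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    x0 = (t[0] * A[1][1] - t[1] * A[0][1]) / det
    x1 = (A[0][0] * t[1] - A[1][0] * t[0]) / det
    return [x0, x1]

# Rows: benchmarks; columns: primitive-operation classes.
# Entries are (hypothetical) operation counts per benchmark run.
counts = [[4e9, 1e9],
          [1e9, 3e9]]
times = [8.0, 13.0]  # measured wall-clock seconds per benchmark

# Solving jointly (rather than timing each operation class in isolation)
# lets the estimate absorb the interplay between operations noted in the
# abstract, to the extent the linear model captures it.
per_op_time = solve_2x2(counts, times)               # seconds per operation
performance_vector = [1.0 / s for s in per_op_time]  # ratings in ops/second

print(performance_vector)  # ~[1.0e9, 2.5e8] ops/s for the two classes
```

A real system would have many more benchmarks than operation classes, making the system overdetermined and calling for a least-squares fit instead of an exact solve.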
<p><b>Index Terms</b>—Computer performance evaluation, performance modeling, benchmark sets, performance vectors, superscalar processors, vector computers.</p>
U. Krishnaswamy and I. D. Scherson, "A Framework for Computer Performance Evaluation Using Benchmark Sets," in IEEE Transactions on Computers, vol. 49, no. 12, pp. 1325-1338, Dec. 2000.