Issue No. 2 - February 2005 (vol. 16)
pp. 175-182
ABSTRACT
The complexity of modern computer systems means that minor variations in performance evaluation procedures can actually determine the outcome. Our case study concerns the comparison of two parallel job schedulers, using different workloads and metrics. It shows that metrics may be sensitive to different job classes, and may fail to measure the performance of the whole workload in an impartial manner. Workload models may implicitly assume that some workload attribute is unimportant and does not warrant modeling; this too can turn out to be wrong. As such effects are hard to predict, a careful experimental methodology is needed in order to find and verify them.
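The sensitivity of metrics to job classes can be illustrated with a toy calculation (this is a hypothetical example, not data from the paper): the commonly used mean-slowdown metric, where slowdown is response time divided by run time, tends to be dominated by short jobs, so a scheduler that favors short jobs "wins" even if long jobs fare poorly.

```python
# Toy illustration (hypothetical numbers): mean slowdown is dominated
# by the short-job class, so it does not weigh all jobs impartially.

def slowdown(wait, run):
    # Slowdown = response time / run time; always >= 1.
    return (wait + run) / run

# Hypothetical workload: (wait_time, run_time) in seconds.
short_jobs = [(50, 10), (80, 10)]        # short jobs, waits large relative to run time
long_jobs = [(100, 3600), (200, 3600)]   # long jobs, waits small relative to run time

all_jobs = short_jobs + long_jobs
overall_mean = sum(slowdown(w, r) for w, r in all_jobs) / len(all_jobs)
short_mean = sum(slowdown(w, r) for w, r in short_jobs) / len(short_jobs)
long_mean = sum(slowdown(w, r) for w, r in long_jobs) / len(long_jobs)

# The overall mean sits near the short-job value, far from the long-job value.
print(f"overall: {overall_mean:.2f}, short: {short_mean:.2f}, long: {long_mean:.2f}")
```

Here the two short jobs have slowdowns of 6 and 9, while the two long jobs are barely above 1; the overall mean (about 4.3) is driven almost entirely by the short-job class, which is one way a metric can be sensitive to a particular job class rather than to the workload as a whole.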
INDEX TERMS
Performance evaluation, sensitivity of results, experimental verification, simulation, parallel job scheduling, backfilling.
CITATION
Dror G. Feitelson, "Experimental Analysis of the Root Causes of Performance Evaluation Results: A Backfilling Case Study", IEEE Transactions on Parallel & Distributed Systems, vol. 16, no. 2, pp. 175-182, February 2005, doi:10.1109/TPDS.2005.18