San Diego, CA, USA
Sept. 20, 2004 to Sept. 23, 2004
ISBN: 0-7803-8694-9
pp: 371-377
M. Sottile, Advanced Computing Laboratory, Los Alamos National Laboratory, NM, USA
R. Minnich, Advanced Computing Laboratory, Los Alamos National Laboratory, NM, USA
ABSTRACT
Microbenchmarks, i.e., very small computational kernels, have become commonly used for quantitative measures of node performance in clusters. For example, a commonly used benchmark measures the amount of time required to perform a fixed quantum of work. Unfortunately, this benchmark is one of many that violate well-known rules from sampling theory, leading to erroneous, contradictory, or misleading results. At a minimum, these types of benchmarks cannot be used to identify time-based activities that may interfere with, and hence limit, application performance. Our original and primary goal remains to identify noise in the system due to periodic activities that are not part of user application code. We discuss why the "fixed quantum of work" benchmark provides data that is of limited use for analysis, and we show code for, discuss, and analyze results from a microbenchmark that follows good rules of sampling hygiene and hence provides useful data for analysis.
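To make the contrast concrete, here is a minimal Python sketch (not the authors' code, which targets C on cluster nodes) of the two measurement styles the abstract contrasts. In the first, each sample is the time taken by a fixed quantum of work, so samples arrive at irregular, data-dependent intervals; in the second, samples are taken at fixed time intervals, which is what uniform-sampling analysis assumes. Function names and parameters are illustrative only.

```python
import time

def fixed_work_samples(n_samples, work_iters=100_000):
    """Time repeated executions of a fixed quantum of work.

    Each sample is the wall-clock time needed to run `work_iters`
    iterations of a trivial kernel. Because each sample takes a
    data-dependent amount of time, the samples are NOT evenly spaced
    in time -- the irregular sampling interval is the kind of
    sampling-theory violation the abstract describes.
    """
    samples = []
    for _ in range(n_samples):
        t0 = time.perf_counter()
        acc = 0
        for i in range(work_iters):
            acc += i          # the fixed quantum of work
        samples.append(time.perf_counter() - t0)
    return samples

def fixed_interval_samples(n_samples, quantum=0.001):
    """Count how much work completes in each fixed time quantum.

    Each sample covers the same wall-clock interval, so the samples
    are (approximately) uniformly spaced in time and are usable for
    frequency-domain analysis of periodic interference.
    """
    samples = []
    for _ in range(n_samples):
        deadline = time.perf_counter() + quantum
        count = 0
        while time.perf_counter() < deadline:
            count += 1        # work completed within the quantum
        samples.append(count)
    return samples
```

Dips in the fixed-interval counts hint at OS noise stealing cycles during that quantum; because those samples are evenly spaced, periodic dips can be located with a standard FFT, which the unevenly spaced fixed-work timings do not support.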
CITATION
M. Sottile, R. Minnich, "Analysis of microbenchmarks for performance tuning of clusters", Proceedings of the 2004 IEEE International Conference on Cluster Computing (CLUSTER 2004), pp. 371-377, doi:10.1109/CLUSTR.2004.1392636