Top500 versus sustained performance - the top problems with the TOP500 list - and what to do about them
2012 21st International Conference on Parallel Architectures and Compilation Techniques (PACT) (2012)
Minneapolis, MN, USA
Sept. 19, 2012 to Sept. 23, 2012
DOI Bookmark: http://doi.ieeecomputersociety.org/
William Kramer, National Center for Supercomputing Applications, University of Illinois, 1205 W Clark Street, Urbana, IL 61801, USA
A popular U.S. talk show host uses “top 10” lists to critique events and culture every night. Our HPC industry is captivated by another list, the TOP500 list, which ranks HPC systems by FLOP/s as measured by a single, long-lived benchmark, Linpack. The TOP500 list has grown in influence largely because of its value as a marketing tool, yet it describes the performance of HPC systems simplistically and unrealistically. Its proponents have advocated for the list for different reasons at different times. This paper critiques the top 10 problems with the TOP500 list and suggests how to correct those shortcomings. It discusses why the TOP500 list limits the impact of HPC systems on real problems, and it identifies other metrics that may more meaningfully and usefully represent the real effectiveness and value of HPC systems.
Benchmark testing, Measurement, Computers, Linear algebra, Computer architecture, System performance, Algorithm design and analysis
W. Kramer, "Top500 versus sustained performance - the top problems with the TOP500 list - and what to do about them," 2012 21st International Conference on Parallel Architectures and Compilation Techniques (PACT), Minneapolis, MN, USA, 2012, pp. 223-230.