Issue No. 4, October-December 2010 (vol. 7)
ISSN: 1545-5971
pp: 337-351
Garth A. Gibson , Carnegie Mellon University, Pittsburgh
Bianca Schroeder , University of Toronto, Toronto
ABSTRACT
Designing highly dependable systems requires a good understanding of failure characteristics. Unfortunately, little raw data on failures in large IT installations is publicly available. This paper analyzes failure data collected at two large high-performance computing sites. The first data set was collected over the past nine years at Los Alamos National Laboratory (LANL) and has recently been made publicly available. It covers 23,000 failures recorded on more than 20 different systems at LANL, mostly large clusters of SMP and NUMA nodes. The second data set was collected over the period of one year on one large supercomputing system comprising 20 nodes and more than 10,000 processors. We study the statistics of the data, including the root cause of failures, the mean time between failures, and the mean time to repair. We find, for example, that average failure rates differ wildly across systems, ranging from 20 to 1,000 failures per year, and that time between failures is well modeled by a Weibull distribution with decreasing hazard rate. From one system to another, mean repair time varies from less than an hour to more than a day, and repair times are well modeled by a lognormal distribution.
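The two distributional claims above can be checked with standard maximum-likelihood fitting. The sketch below (Python with NumPy/SciPy; not from the paper, and the synthetic data is purely illustrative) fits a Weibull distribution to hypothetical inter-failure times and a lognormal distribution to hypothetical repair times. A fitted Weibull shape parameter below 1 corresponds to a decreasing hazard rate: the longer a system has survived since its last failure, the less likely it is to fail in the next instant.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    # Hypothetical inter-failure times in hours; real inputs would come
    # from failure logs such as the LANL data set.
    time_between_failures = rng.weibull(0.7, size=1000) * 100.0

    # Fit a two-parameter Weibull (location fixed at 0, as is usual for
    # waiting times). A shape parameter c < 1 implies a decreasing
    # hazard rate, matching the paper's finding.
    c, _, scale = stats.weibull_min.fit(time_between_failures, floc=0)
    print(f"Weibull shape c = {c:.2f}  (c < 1 => decreasing hazard rate)")

    # Repair times are reported to be well modeled by a lognormal; the
    # fitted scale parameter of scipy's lognorm is the median.
    repair_times = rng.lognormal(mean=1.0, sigma=1.5, size=1000)
    s, _, median = stats.lognorm.fit(repair_times, floc=0)
    print(f"lognormal sigma = {s:.2f}, median repair time = {median:.1f} h")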
INDEX TERMS
Large-scale systems, high-performance computing, supercomputing, reliability, failures, node outages, field study, empirical study, repair time, time between failures, root cause.
CITATION
Garth A. Gibson, Bianca Schroeder, "A Large-Scale Study of Failures in High-Performance Computing Systems," IEEE Transactions on Dependable and Secure Computing, vol. 7, no. 4, pp. 337-351, October-December 2010, doi:10.1109/TDSC.2009.4