2012 IEEE 26th International Parallel and Distributed Processing Symposium Workshops & PhD Forum (2012)
Shanghai, China
May 21, 2012 to May 25, 2012
ISBN: 978-1-4673-0974-5
pp: 1136-1143
ABSTRACT
Fault detection and prediction in HPC clusters and Cloud-computing systems are increasingly challenging issues. Several system middleware components, such as job schedulers and MPI implementations, provide support for both reactive and proactive mechanisms to tolerate faults. These techniques rely on external components, such as system logs and infrastructure monitors, to provide information about hardware/software failures, either as detections or as predictions. However, these middleware components work in isolation, without disseminating the knowledge of the faults they encounter. In this context, we propose a lightweight multi-threaded service, namely FTB-IPMI, which provides distributed fault monitoring using the Intelligent Platform Management Interface (IPMI) and coordinated propagation of fault information using the Fault-Tolerance Backplane (FTB). In essence, it serves as a middleman between system hardware and the software stack by translating raw hardware events to structured software events and delivering them to any interested component using a publish-subscribe framework. Fault predictors and other decision-making engines that rely on distributed failure information can benefit from FTB-IPMI to facilitate proactive fault-tolerance mechanisms such as preemptive job migration. We have developed a fault-prediction engine within MVAPICH2, an RDMA-based MPI implementation, to demonstrate this capability. Failure predictions made by this engine are used to trigger migration of processes from failing nodes to healthy spare nodes, thereby providing resilience to the MPI application. Experimental evaluation clearly indicates that a single instance of FTB-IPMI can scale to several hundred nodes with a remarkably low resource-utilization footprint. A deployment of FTB-IPMI that services a cluster with 128 compute nodes sweeps the entire cluster and collects IPMI sensor information on CPU temperature, system voltages and fan speeds in about 0.75 seconds. The average CPU utilization of this service running on a single node is 0.35%.
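To make the abstract's translation step concrete, the following is a minimal, hypothetical C sketch (not taken from the paper) of the core idea: a periodic sweep reads raw IPMI sensor data such as CPU temperature, system voltages and fan speeds via the standard ipmitool utility and reshapes each reading into a structured event record. The publish_event helper and the 60-second sweep interval are illustrative assumptions; an actual FTB-IPMI deployment would hand such records to the Fault-Tolerance Backplane's publish-subscribe interface rather than printing them.

/*
 * Minimal sketch (not the authors' implementation): poll local IPMI sensors
 * with ipmitool and turn raw readings into structured event records.
 * Assumes ipmitool is installed; the publish step is a placeholder for an
 * FTB publish call.
 */
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Emit one structured event; a real deployment would hand this record to
 * the Fault-Tolerance Backplane (FTB) instead of printing it. */
static void publish_event(const char *sensor, const char *value,
                          const char *unit, const char *status)
{
    printf("EVENT sensor=\"%s\" value=%s unit=\"%s\" status=%s\n",
           sensor, value, unit, status);
}

static void sweep_sensors(void)
{
    /* "ipmitool sensor" prints pipe-separated rows:
     *   name | reading | unit | status | thresholds ... */
    FILE *fp = popen("ipmitool sensor", "r");
    if (!fp) {
        perror("popen");
        return;
    }

    char line[512];
    while (fgets(line, sizeof(line), fp)) {
        char *fields[4] = {0};
        int n = 0;
        for (char *tok = strtok(line, "|"); tok && n < 4;
             tok = strtok(NULL, "|"))
            fields[n++] = tok;
        if (n < 4)
            continue;

        /* Trim leading/trailing whitespace from each field. */
        for (int i = 0; i < 4; i++) {
            while (*fields[i] == ' ')
                fields[i]++;
            char *end = fields[i] + strlen(fields[i]);
            while (end > fields[i] && (end[-1] == ' ' || end[-1] == '\n'))
                *--end = '\0';
        }

        /* Forward every reading; subscribers (e.g. a fault predictor)
         * can filter on status or thresholds. */
        publish_event(fields[0], fields[1], fields[2], fields[3]);
    }
    pclose(fp);
}

int main(void)
{
    for (;;) {            /* periodic sweep; interval is an assumption */
        sweep_sensors();
        sleep(60);
    }
    return 0;
}

A subscriber such as the fault-prediction engine described in the abstract would receive these records through FTB and could use them to decide, for instance, whether to trigger preemptive migration of processes away from a failing node.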
INDEX TERMS
Libraries, Monitoring, Temperature sensors, Fault tolerant systems, Fault tolerance, Software, Hardware, FTB, HPC clusters, Fault detection, coordinated fault propagation, IPMI
CITATION

R. Rajachandrasekar, X. Besseron and D. K. Panda, "Monitoring and Predicting Hardware Failures in HPC Clusters with FTB-IPMI," 2012 IEEE 26th International Parallel and Distributed Processing Symposium Workshops & PhD Forum (IPDPSW), Shanghai, China, 2012, pp. 1136-1143.
doi:10.1109/IPDPSW.2012.139