Cluster Computing and the Grid, IEEE International Symposium on (2010)
Melbourne, VIC, Australia
May 17, 2010 to May 20, 2010
MapReduce is a programming paradigm for parallel processing that is increasingly being used for data-intensive applications in cloud computing environments. An understanding of the characteristics of workloads running in MapReduce environments benefits both cloud service providers and their users: the service provider can use this knowledge to make better scheduling decisions, while the user can learn which aspects of their jobs impact performance. This paper analyzes 10 months of MapReduce logs from the M45 supercomputing cluster, which Yahoo! made freely available to select universities for academic research. We characterize resource utilization patterns, job patterns, and sources of failures. We use an instance-based learning technique that exploits temporal locality to predict job completion times from historical data and to identify potential performance problems in our dataset.
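The abstract's prediction approach can be illustrated with a minimal sketch of instance-based (k-nearest-neighbor) learning with temporal locality: restrict the candidate set to the most recent jobs, then take a distance-weighted average of the durations of the k most similar ones. All names, features, and parameter values below are illustrative assumptions, not details taken from the paper.

```python
import math

def predict_completion_time(history, job_features, k=3, window=50):
    """Instance-based prediction of job completion time.

    history      -- list of (features, duration_seconds) pairs,
                    ordered oldest to newest (illustrative schema)
    job_features -- feature tuple of the incoming job, e.g.
                    (num_map_tasks, input_gb) -- hypothetical features
    Temporal locality is exploited by only considering the most
    recent `window` jobs as candidate neighbors.
    """
    recent = history[-window:]

    def dist(a, b):
        # Euclidean distance in feature space.
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    # The k historical jobs most similar to the incoming job.
    neighbors = sorted(recent, key=lambda h: dist(h[0], job_features))[:k]

    # Inverse-distance weighting: closer neighbors count more.
    weights = [1.0 / (dist(f, job_features) + 1e-9) for f, _ in neighbors]
    durations = [d for _, d in neighbors]
    return sum(w * d for w, d in zip(weights, durations)) / sum(weights)

# Example: four past jobs as (num_map_tasks, input_gb) -> duration (s).
history = [((10, 1.0), 100.0), ((20, 2.0), 200.0),
           ((30, 3.0), 300.0), ((12, 1.2), 110.0)]
estimate = predict_completion_time(history, (11, 1.1), k=2)
```

A predicted time far from the actual completion time can then flag a potential performance problem, in the spirit of the anomaly identification the abstract describes.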
MapReduce, Workload characterization, Distributed systems
S. Kavulya, P. Narasimhan, R. Gandhi and J. Tan, "An Analysis of Traces from a Production MapReduce Cluster," IEEE International Symposium on Cluster Computing and the Grid (CCGRID), Melbourne, VIC, Australia, 2010, pp. 94-103.