2012 IEEE Fifth International Conference on Cloud Computing (CLOUD 2012)
Honolulu, HI, USA
June 24-29, 2012
ISSN: 2159-6182
ISBN: 978-1-4673-2892-0
pp: 1-8
Efficient resource management in data centers and clouds running large distributed data processing frameworks like MapReduce is crucial for enhancing the performance of hosted applications and increasing resource utilization. However, existing resource scheduling schemes in Hadoop MapReduce allocate resources at the granularity of fixed-size, static portions of nodes, called slots. In this work, we show that MapReduce jobs have widely varying demands for multiple resources, making static, fixed-size slot-level resource allocation a poor choice from both the performance and the resource-utilization standpoints. Furthermore, the lack of coordination in the management of multiple resources across nodes prevents dynamic slot reconfiguration and leads to resource contention. Motivated by this, we propose MROrchestrator, a MapReduce resource Orchestrator framework that can dynamically identify resource bottlenecks and resolve them through fine-grained, coordinated, on-demand resource allocations. We have implemented MROrchestrator on two 24-node Hadoop clusters, one native and one virtualized. Experimental results with a suite of representative MapReduce benchmarks demonstrate up to a 38% reduction in job completion times and up to a 25% increase in resource utilization. We further demonstrate the performance boost in existing resource managers like NGM and Mesos when augmented with MROrchestrator.
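For context, the "fixed-size, static slots" the abstract critiques are set per node in Hadoop 1.x through `mapred-site.xml`. A minimal sketch (slot counts are illustrative, not from the paper):

```xml
<!-- mapred-site.xml (Hadoop 1.x): per-node slot counts are fixed here. -->
<!-- Every slot is identically sized regardless of a job's actual CPU or -->
<!-- memory demand; this rigidity is what fine-grained, dynamic allocation -->
<!-- schemes such as MROrchestrator aim to avoid. -->
<configuration>
  <property>
    <name>mapred.tasktracker.map.tasks.maximum</name>
    <value>4</value> <!-- illustrative: 4 map slots per node -->
  </property>
  <property>
    <name>mapred.tasktracker.reduce.tasks.maximum</name>
    <value>2</value> <!-- illustrative: 2 reduce slots per node -->
  </property>
</configuration>
```

Because these values are read at TaskTracker startup and apply uniformly to all jobs, changing a node's effective resource share requires reconfiguration and restart rather than the on-demand adjustment described above.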
Resource management, Dynamic scheduling, Computational modeling, Memory management, Estimation, Predictive models, Heuristic algorithms, Resource Scheduling, Cloud, MapReduce
Bikash Sharma, Ramya Prabhakar, Seung-Hwan Lim, Mahmut T. Kandemir, and Chita R. Das, "MROrchestrator: A Fine-Grained Resource Orchestration Framework for MapReduce Clusters," in Proceedings of the 2012 IEEE Fifth International Conference on Cloud Computing (CLOUD), pp. 1-8, 2012, doi:10.1109/CLOUD.2012.37