Jan. 28, 2013 to Jan. 30, 2013
Hiroyuki Makino, Distributed Computing Technology Project, Software Innovation Center, Nippon Telegraph and Telephone Corporation, Tokyo, Japan
Hadoop, an implementation of Google's MapReduce, is widely used these days for big data analysis. Yahoo Inc. operated 25 PB with 25,000 nodes in 2010. Resource management for such a large number of nodes is quite difficult from the aspects of configuration, deployment, and efficient resource utilization. By deploying virtual machines (VMs), Hadoop management becomes much easier. Amazon has already released Hadoop on a Xen-virtualized environment as Elastic MapReduce. However, Hadoop on VM clusters suffers performance degradation due to the overhead of virtualization. Thus, it is important to minimize this overhead. We build a Hadoop performance model and examine how performance is affected by changing the VM configuration, the allocation of VMs over physical machines, and the multiplicity of jobs. We find that the performance of I/O-intensive jobs is more sensitive to virtualization overhead than that of CPU-intensive jobs. For I/O-intensive jobs, the performance degradation caused by changing the VM configuration is at most 55%, and that caused by changing the allocation is at most 18%. For I/O-intensive jobs, the best practice is to increase the number of VMs rather than the number of VCPUs per VM, to allocate VMs widely over physical servers, and to decrease the number of simultaneously executed jobs. The main factor in virtualization overhead is disk I/O shared by multiple VMs on a physical server.
virtualized environment, Hadoop performance evaluation, KVM, virtual machine, cluster
Hiroyuki Makino, "Design and performance evaluation for Hadoop clusters on virtualized environment," 2013 International Conference on Information Networking (ICOIN), 2013, pp. 244-249, doi:10.1109/ICOIN.2013.6496384