IEEE/ACM International Workshop on Grid Computing (2011)
Lyon, France
Sept. 21, 2011 to Sept. 23, 2011
ISSN: 1550-5510
ISBN: 978-0-7695-4572-1
pp: 137-144
ABSTRACT
To investigate the challenges of migrating multi-tier applications to Infrastructure-as-a-Service (IaaS) clouds, we performed an experimental investigation by deploying a processor-bound and an input/output-bound variant of the RUSLE2 erosion model to an IaaS-based private cloud. Scaling the applications to achieve optimal system throughput is complex and involves much more than simply increasing the number of allotted virtual machines (VMs). While scaling the application variants, we encountered a series of bottlenecks unique to each application's processing, I/O, and memory requirements, referred to herein as the application's profile. To investigate the impact of provisioning variation on hosting multi-tier applications, we tested four schemes of VM deployment across the physical nodes of our cloud. Performance degradation was more pronounced when multiple I/O- or CPU-intensive application components were co-located on the same physical hardware. We investigated the virtualization overhead incurred by Kernel-based Virtual Machines (KVM) by deploying our application variants to both physical and virtual machines. Overhead varied with the unique characteristics of each application's profile: we observed ~112% overhead for the input/output-bound application but only ~10% for the processor-bound application. Understanding an application's profile was found to be important for optimal IaaS-based cloud migration and scaling.
INDEX TERMS
Cloud Computing, Infrastructure-as-a-Service, Kernel-based virtual machines (KVM), provisioning variation, scalability, virtualization
CITATION

J. Lyon, S. Pallickara, K. Rojas, W. Lloyd, M. Arabi and O. David, "Migration of Multi-tier Applications to Infrastructure-as-a-Service Clouds: An Investigation Using Kernel-Based Virtual Machines," 2011 12th IEEE/ACM International Conference on Grid Computing (GRID), Lyon, 2011, pp. 137-144.
doi:10.1109/Grid.2011.26