Issue No. 01 - First Quarter (2013 vol. 6)
DOI Bookmark: http://doi.ieeecomputersociety.org/10.1109/TSC.2011.41
Ying Song , Chinese Academy of Sciences, Beijing
Yuzhong Sun , Chinese Academy of Sciences, Beijing
Weisong Shi , Wayne State University, Detroit
In a shared virtual computing environment, dynamic load changes and the varying quality requirements of applications over their lifetimes give rise to dynamic, heterogeneous capacity demands, which result in low resource utilization and degraded application quality under existing static resource allocation. Furthermore, the total capacity required by all the applications hosted in current enterprise data centers, for example at Google, may exceed the capacity of the platform. In this paper, we argue that the existing technique of turning servers on or off with the help of virtual machine (VM) migration is not sufficient. Instead, finding an optimized dynamic resource allocation method that solves the problem of on-demand resource provisioning for VMs is the key to improving the efficiency of data centers. However, existing dynamic resource allocation methods focus on either local optimization within a server or central global optimization, limiting the efficiency of data centers. We propose a two-tiered on-demand resource allocation mechanism, consisting of local and global resource allocation with feedback, to provide on-demand capacity to concurrent applications. We model on-demand resource allocation using optimization theory. Based on the proposed mechanism and model, we propose a set of on-demand resource allocation algorithms. When resource competition arises, our algorithms preferentially ensure the performance of critical applications designated by the data center manager, according to the time-varying capacity demands and quality of the applications. Using Rainbow, a Xen-based prototype we implemented, we evaluate the VM-based shared platform as well as the two-tiered on-demand resource allocation mechanism and algorithms.
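To make the two-tiered idea concrete, the following is a minimal sketch (not the paper's actual algorithms) of how a local allocator within one server might divide capacity on demand, giving strict priority to manager-designated critical VMs, while a global tier receives each server's unmet demand as feedback for cross-server rebalancing. All class and function names here are illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class VM:
    name: str
    demand: float          # CPU capacity currently requested (e.g., cores)
    critical: bool = False  # designated as critical by the data center manager
    allocated: float = 0.0

@dataclass
class Server:
    capacity: float
    vms: list = field(default_factory=list)

def local_allocate(server):
    """Tier 1 (local): divide one server's capacity among its VMs on demand.

    Critical VMs are fully served first; remaining capacity is shared
    among the others in proportion to their demands, capped at demand.
    """
    remaining = server.capacity
    critical = [v for v in server.vms if v.critical]
    others = [v for v in server.vms if not v.critical]
    for v in critical:
        v.allocated = min(v.demand, remaining)
        remaining -= v.allocated
    total = sum(v.demand for v in others)
    for v in others:
        share = remaining * v.demand / total if total > 0 else 0.0
        v.allocated = min(share, v.demand)

def global_feedback(servers):
    """Tier 2 (global): per-server unmet demand (negative means spare
    capacity), which a global allocator could use to rebalance VMs."""
    return [sum(v.demand for v in s.vms) - s.capacity for s in servers]
```

For example, on a 4-core server hosting a critical VM demanding 2 cores and two ordinary VMs demanding 2 cores each, the critical VM receives its full 2 cores and the others split the remaining 2 proportionally, 1 core apiece; the global tier sees 2 cores of unmet demand on that server.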
The experimental results show that Rainbow without dynamic resource allocation (Rainbow-NDA) provides 26 to 324 percent improvements in application performance, as well as 26 percent higher average CPU utilization, than the traditional service computing framework in which applications use exclusive servers. The two-tiered on-demand resource allocation further improves the performance of critical applications by 9 to 16 percent, achieving 75 percent of the maximum possible performance improvement, while introducing at most 5 percent performance degradation to other applications, with 1 to 5 percent improvements in resource utilization compared with Rainbow-NDA.
Resource management, servers, dynamic scheduling, optimization, heuristic algorithms, algorithm design and analysis, data models, data centers, virtual machines, on-demand resource allocation
Y. Sun, W. Shi and Y. Song, "A Two-Tiered On-Demand Resource Allocation Mechanism for VM-Based Data Centers," in IEEE Transactions on Services Computing, vol. 6, no. 1, pp. 116-129, 2013.