2014 IEEE 7th International Conference on Cloud Computing (CLOUD) (2014)
Anchorage, AK, USA
June 27, 2014 to July 2, 2014
ISSN: 2159-6190
ISBN: 978-1-4799-5062-1
pp: 448-455
An important feature of cloud computing is its elasticity, that is, the ability to have resource capacity dynamically modified according to the current system load. Auto-scaling is challenging because it must account for two conflicting objectives: minimising the resource capacity allocated to serve users and maximising QoS, which typically translates to short response times. Current auto-scaling techniques are based solely on load forecasts and ignore the perception that users have of cloud services. As a consequence, providers tend to provision a volume of resources that is significantly larger than necessary to keep users satisfied. In this article, we propose a scheduling algorithm and an auto-scaling triggering technique that exploit user patience in order to identify critical times when auto-scaling is needed and the appropriate volume of capacity by which the cloud platform should either grow or shrink. The proposed technique assists service providers in reducing costs related to resource allocation while delivering the same QoS to users. Our experiments show that it is possible to reduce resource-hours by up to approximately 8% compared to auto-scaling based on system utilisation.
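The triggering idea described in the abstract can be sketched roughly as follows. This is a minimal illustration only; the function name, thresholds, and inputs are assumptions for exposition, not the paper's actual algorithm:

```python
def scaling_decision(expected_response_times, patience_limits,
                     out_threshold=0.2, in_threshold=0.05):
    """Return a capacity adjustment factor driven by user patience.

    expected_response_times: predicted response time per pending request (s)
    patience_limits: how long each corresponding user tolerates waiting (s)
    Returns a multiplicative factor: >1.0 grow, <1.0 shrink, 1.0 hold.
    """
    # Fraction of users whose expected wait exceeds their patience.
    impatient = sum(r > p for r, p in
                    zip(expected_response_times, patience_limits))
    ratio = impatient / len(expected_response_times)
    if ratio > out_threshold:
        # Many users are about to become dissatisfied: grow in
        # proportion to how widespread the impatience is.
        return 1.0 + ratio
    if ratio < in_threshold:
        # Nearly everyone is well within their patience: capacity is
        # over-provisioned, so shrink it.
        return 0.8
    return 1.0  # patience levels are acceptable; keep capacity as-is
```

The key contrast with utilisation-based triggers is that the decision here depends on how users perceive delay, so a loaded system whose users are still within their patience limits would not trigger a scale-out.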
Keywords: Quality of service, Time factors, Scheduling, Resource management, Cloud computing, Scheduling algorithms

R. L. Cunha, M. D. Assuncao, C. Cardonha and M. A. Netto, "Exploiting User Patience for Scaling Resource Capacity in Cloud Services," 2014 IEEE 7th International Conference on Cloud Computing (CLOUD), Anchorage, AK, USA, 2014, pp. 448-455.