2014 IEEE International Conference on Web Services (ICWS) (2014)
Anchorage, AK, USA
June 27, 2014 to July 2, 2014
DOI Bookmark: http://doi.ieeecomputersociety.org/10.1109/ICWS.2014.70
In the era of big data, data-intensive applications pose new challenges to the field of service composition, namely composition efficiency and scalability. How to compose massive and evolving services in such dynamic scenarios is a vital problem demanding prompt solutions. We therefore propose a new model for large-scale adaptive service composition. The model integrates reinforcement learning, to address adaptability in a highly dynamic environment, with game theory, to coordinate agents' behavior toward a common task. In particular, we propose a multi-agent Q-learning algorithm for service composition based on this model. Experimental results demonstrate the effectiveness and efficiency of our approach and show better performance than a single-agent Q-learning method.
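To make the abstract's core idea concrete, the following is a minimal sketch of Q-learning applied to service composition, not the paper's actual algorithm. It assumes a toy workflow of three sequential tasks, each with two hypothetical candidate services whose made-up QoS scores serve as rewards; multiple agents share one Q-table as a crude stand-in for the coordinated joint learning the paper achieves via game theory.

```python
import random

# Hypothetical QoS score (reward) for choosing service a/b at each task.
QOS = {
    (0, 'a'): 0.9, (0, 'b'): 0.4,
    (1, 'a'): 0.3, (1, 'b'): 0.8,
    (2, 'a'): 0.7, (2, 'b'): 0.6,
}
ACTIONS = ['a', 'b']
N_TASKS = 3

def epsilon_greedy(Q, state, eps):
    """Explore with probability eps, otherwise pick the best-known service."""
    if random.random() < eps:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def train(n_agents=2, episodes=500, alpha=0.1, gamma=0.9, eps=0.2, seed=0):
    random.seed(seed)
    # A single shared Q-table over (task, service) pairs.
    Q = {(s, a): 0.0 for s in range(N_TASKS) for a in ACTIONS}
    for _ in range(episodes):
        for _ in range(n_agents):          # each agent does one pass per episode
            for s in range(N_TASKS):
                a = epsilon_greedy(Q, s, eps)
                r = QOS[(s, a)]
                # Bootstrapped value of the next task (0 at the final task).
                nxt = 0.0 if s == N_TASKS - 1 else max(Q[(s + 1, b)] for b in ACTIONS)
                Q[(s, a)] += alpha * (r + gamma * nxt - Q[(s, a)])
    return Q

Q = train()
# Greedy composition plan: the highest-valued service for each task.
plan = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_TASKS)]
print(plan)
```

Because rewards here are per-step QoS scores, the learned greedy plan simply selects the service with the best QoS at each task; the paper's contribution lies in scaling this idea to massive, evolving service sets with game-theoretic coordination among agents.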
Index Terms: Joints, Games, Markov processes, Quality of service, Adaptation models, Web services, Game theory
H. Wang, Q. Wu, X. Chen, Q. Yu, Z. Zheng and A. Bouguettaya, "Adaptive and Dynamic Service Composition via Multi-agent Reinforcement Learning," 2014 IEEE International Conference on Web Services (ICWS), Anchorage, AK, USA, 2014, pp. 447-454.