2014 IEEE International Conference on Web Services (ICWS) (2014)
Anchorage, AK, USA
June 27, 2014 to July 2, 2014
ISBN: 978-1-4799-5053-9
pp: 447-454
ABSTRACT
In the era of big data, data-intensive applications have posed new challenges to the field of service composition, namely composition efficiency and scalability. How to compose massive and evolving services in such dynamic scenarios is a vital problem demanding prompt solutions. To address it, we propose a new model for large-scale adaptive service composition in this paper. The model integrates reinforcement learning, which targets the problem of adaptability in a highly dynamic environment, with game theory, which is used to coordinate the agents' behavior toward a common task. In particular, we also propose a multi-agent Q-learning algorithm for service composition based on this model. The experimental results demonstrate the effectiveness and efficiency of our approach and show better performance than the single-agent Q-learning method.
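The abstract builds on Q-learning as the underlying reinforcement-learning mechanism. The following is a minimal, hypothetical single-agent sketch of the tabular Q-learning update applied to an abstract service-selection workflow; the environment, reward values, and service names are illustrative assumptions, not the paper's model, and the multi-agent coordination via game theory described above is not shown.

```python
import random

def q_learning(num_states, num_actions, step, episodes=500,
               alpha=0.1, gamma=0.9, eps=0.2, seed=0):
    """Tabular epsilon-greedy Q-learning.

    step(state, action) -> (next_state, reward, done) models one
    service-selection decision in an abstract composition workflow.
    """
    rng = random.Random(seed)
    Q = [[0.0] * num_actions for _ in range(num_states)]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            if rng.random() < eps:                       # explore
                a = rng.randrange(num_actions)
            else:                                        # exploit current estimate
                a = max(range(num_actions), key=lambda x: Q[s][x])
            s2, r, done = step(s, a)
            # Standard Q-learning update toward the bootstrapped target.
            target = r + (0.0 if done else gamma * max(Q[s2]))
            Q[s][a] += alpha * (target - Q[s][a])
            s = s2
    return Q

# Toy workflow (assumed for illustration): 3 sequential tasks with 2
# candidate services each; service 1 yields a higher hypothetical QoS reward.
def step(state, action):
    reward = 1.0 if action == 1 else 0.2
    nxt = state + 1
    return nxt, reward, nxt == 3

Q = q_learning(num_states=4, num_actions=2, step=step)
policy = [max(range(2), key=lambda a: Q[s][a]) for s in range(3)]
print(policy)  # the learned policy should prefer service 1 at every task
```

The paper's contribution extends this basic update to multiple agents whose joint actions are coordinated through game theory, which is what enables the reported scalability gains over the single-agent baseline.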
INDEX TERMS
Joints, Games, Markov processes, Quality of service, Adaptation models, Web services, Game theory
CITATION

H. Wang, Q. Wu, X. Chen, Q. Yu, Z. Zheng and A. Bouguettaya, "Adaptive and Dynamic Service Composition via Multi-agent Reinforcement Learning," 2014 IEEE International Conference on Web Services (ICWS), Anchorage, AK, USA, 2014, pp. 447-454.
doi:10.1109/ICWS.2014.70