IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology (2009)
Milan, Italy
Sept. 15, 2009 to Sept. 18, 2009
ISBN: 978-0-7695-3801-3
pp: 28-35
Large-scale simulation studies are necessary to study the learning behaviour of individual agents and the overall system dynamics. One reason is that planning algorithms for finding optimal solutions to fully observable general decentralised Markov decision problems do not admit polynomial-time worst-case complexity bounds. Additionally, agent interaction often implies a non-stationary environment, which does not lend itself to asymptotically greedy policies; policies with a constant level of exploration are therefore required so that agents can adapt continuously. This paper casts the application domain of distributed task assignment into the formalisms of queueing theory, complex networks and decentralised Markov decision problems to analyse the impact of the momentum of a standard back-propagation neural network function approximator, the discount factor of $SARSA(0)$ reinforcement learning, and the $\epsilon$ parameter of the $\epsilon$-greedy policy. For this purpose, large queueing networks of one thousand interacting agents are evolved. A Kriging metamodel is fitted and, in combination with simulated annealing, optimal operating conditions with respect to the total average response time are found. The insights gained from this study are significant in that they provide guidance in deploying large-scale distributed task assignment systems modelled as multi-agent queueing networks.
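The sketch below illustrates how the three parameters studied in the paper (the $\epsilon$ of the $\epsilon$-greedy policy, the discount factor $\gamma$ of $SARSA(0)$, and the momentum of the back-propagation function approximator) interact in a single on-policy update. It is a minimal stand-in, not the paper's implementation: the toy environment, the linear approximator (in place of the paper's neural network), and all constants are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

N_STATES, N_ACTIONS, N_FEATURES = 10, 4, 16
EPSILON, GAMMA, ALPHA, MOMENTUM = 0.1, 0.9, 0.01, 0.8  # illustrative values

# Random state features as a toy encoding (assumption).
PHI = rng.standard_normal((N_STATES, N_FEATURES))

# Linear function approximator: one weight vector per action, trained by
# gradient descent with a classical momentum term (stand-in for the paper's
# back-propagation neural network).
W = np.zeros((N_ACTIONS, N_FEATURES))
V = np.zeros_like(W)  # momentum buffer

def q_values(s):
    return W @ PHI[s]

def epsilon_greedy(s):
    # Constant epsilon: the policy keeps exploring forever, which the paper
    # argues is needed in a non-stationary multi-agent setting.
    if rng.random() < EPSILON:
        return int(rng.integers(N_ACTIONS))
    return int(np.argmax(q_values(s)))

def step(s, a):
    # Toy stochastic transition and reward; in the paper this role is played
    # by the multi-agent queueing-network simulator.
    s_next = (s + a + 1) % N_STATES
    reward = -1.0 + rng.normal(scale=0.1)
    return s_next, reward

s, a = 0, epsilon_greedy(0)
for t in range(10_000):
    s_next, r = step(s, a)
    a_next = epsilon_greedy(s_next)  # on-policy successor action: SARSA(0)
    td_error = r + GAMMA * q_values(s_next)[a_next] - q_values(s)[a]
    # Momentum-smoothed update of the weights for the taken action.
    V[a] = MOMENTUM * V[a] + ALPHA * td_error * PHI[s]
    W[a] += V[a]
    s, a = s_next, a_next

print("learned Q(s=0):", np.round(q_values(0), 3))
```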
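A similarly hedged sketch of the metamodelling step: a Kriging (Gaussian process) surrogate is fitted to observed response times and then minimised over the parameter space with simulated annealing. The quadratic stand-in for the simulator, the parameter bounds, and the use of SciPy's `dual_annealing` in place of the paper's annealer are all assumptions.

```python
import numpy as np
from scipy.optimize import dual_annealing
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(1)

def simulate_response_time(x):
    # Toy stand-in for the expensive simulation: noisy mean response time
    # as a function of (epsilon, gamma, momentum).
    eps, gamma, mom = x
    return ((eps - 0.05) ** 2 + (gamma - 0.9) ** 2 + (mom - 0.7) ** 2
            + rng.normal(scale=0.01))

# Design points; in the paper these come from the large-scale simulations.
bounds = [(0.0, 0.5), (0.0, 1.0), (0.0, 1.0)]
X = rng.uniform([b[0] for b in bounds], [b[1] for b in bounds], size=(40, 3))
y = np.array([simulate_response_time(x) for x in X])

# Fit the Kriging metamodel (an anisotropic Gaussian process).
gp = GaussianProcessRegressor(
    kernel=ConstantKernel() * RBF(length_scale=[0.1, 0.1, 0.1]),
    normalize_y=True,
).fit(X, y)

# Minimise the metamodel's predicted response time by simulated annealing.
result = dual_annealing(lambda x: float(gp.predict(x.reshape(1, -1))[0]),
                        bounds=bounds)
print("estimated optimum (epsilon, gamma, momentum):", np.round(result.x, 3))
```

Optimising the cheap surrogate rather than the simulator itself is what makes the global search tractable: each annealing step costs a metamodel prediction instead of a thousand-agent simulation run.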
Keywords: Queueing Networks, Markov Decision Problem, Multi-agent Reinforcement Learning, Kriging

W. Harrison and D. Dahlem, "Globally Optimal Multi-agent Reinforcement Learning Parameters in Distributed Task Assignment," 2009 IEEE/WIC/ACM International Joint Conference on Web Intelligence and Intelligent Agent Technology (WI-IAT), Milan, Italy, 2009, pp. 28-35.