Fourth International Conference on Machine Learning and Applications (ICMLA 2010)
Washington, D.C., USA
Dec. 12, 2010 to Dec. 14, 2010
ISBN: 978-0-7695-4300-0
pp: 113-118
Policy Gradients with Parameter-based Exploration (PGPE) is a model-free reinforcement learning method that alleviates the high-variance gradient estimates encountered in standard policy gradient methods, and it has been shown to drastically speed up convergence on several large-scale reinforcement learning tasks. However, the independent normal distributions that PGPE uses to search parameter space are inadequate for some problems with multimodal reward surfaces. This paper extends the basic PGPE algorithm to use a multimodal mixture distribution for each parameter while remaining efficient. Experimental results on the Rastrigin function and the inverted pendulum benchmark demonstrate the advantages of this modification, with faster convergence to better optima.
Policy Gradients, Multi-Modal, Optimization, Parameter Exploration
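To make the idea concrete, the following is a minimal sketch of a PGPE-style symmetric-sampling update on the Rastrigin benchmark mentioned in the abstract. The function names, hyperparameters, and reward normalisation are illustrative assumptions, not the authors' implementation; the paper's mixture-distribution extension is only hinted at in the comments.

```python
import math
import random

def rastrigin(theta):
    # Multimodal benchmark from the paper: global minimum 0 at the origin,
    # surrounded by a regular grid of local minima.
    return 10 * len(theta) + sum(t * t - 10 * math.cos(2 * math.pi * t) for t in theta)

def pgpe_minimize(f, dim=2, iters=3000, alpha=0.1, sigma=0.5, seed=1):
    """PGPE-style search: perturb the *parameters* (not the actions) with
    symmetric Gaussian noise and follow the estimated reward gradient.
    The hyperparameters and the normalisation below are illustrative
    choices; the paper's extension samples each parameter from a Gaussian
    *mixture* instead of the single Gaussian used here."""
    rng = random.Random(seed)
    mu = [0.5] * dim  # start away from the global optimum
    for _ in range(iters):
        eps = [rng.gauss(0.0, sigma) for _ in range(dim)]
        r_plus = -f([m + e for m, e in zip(mu, eps)])   # reward = negative cost
        r_minus = -f([m - e for m, e in zip(mu, eps)])
        # Normalised symmetric-sampling update (a simplified stand-in for
        # the baseline-corrected update in the PGPE literature).
        step = alpha * (r_plus - r_minus) / (abs(r_plus) + abs(r_minus) + 1e-8)
        mu = [m + step * e for m, e in zip(mu, eps)]
    return mu
```

Because each parameter is explored with a single Gaussian, this sketch can stall in one of Rastrigin's many local minima — exactly the failure mode the paper's multimodal mixture extension targets.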

C. Osendorfer, J. Schmidhuber, A. Graves and F. Sehnke, "Multimodal Parameter-exploring Policy Gradients," Fourth International Conference on Machine Learning and Applications (ICMLA), Washington, D.C., USA, 2010, pp. 113-118.