Bayesian Nonparametric Methods for Partially-Observable Reinforcement Learning
PrePrint
ISSN: 0162-8828
Finale Doshi-Velez, MIT, Cambridge
David Pfau, Columbia University, New York
Frank Wood, University of Oxford, Oxford
Nicholas Roy, MIT, Cambridge
Making intelligent decisions from incomplete information is critical in many applications: for example, robots must choose actions based on imperfect sensors, and speech-based interfaces must infer a user's needs from noisy microphone inputs. What makes these tasks hard is that often we lack a natural representation with which to model the domain and choose actions; we must learn the domain's properties while simultaneously performing the task. Learning a representation also involves trade-offs between modeling the data seen so far and being able to make predictions about new data. This article explores learning representations of stochastic systems using Bayesian nonparametric statistics. Bayesian nonparametric methods allow the sophistication of a representation to scale gracefully with the complexity of the data. Our main contribution is a careful empirical evaluation of how representations learned using Bayesian nonparametric methods compare to other standard learning approaches, especially in support of planning and control. We show that the Bayesian aspects of the methods achieve state-of-the-art performance in decision making with relatively few samples, while the nonparametric aspects often result in fewer computations. These results hold across a variety of different techniques for choosing actions given a representation.
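The abstract's central claim, that a Bayesian nonparametric representation "scales gracefully with the complexity of the data," can be illustrated with the Chinese restaurant process, the partition prior underlying models such as the HDP-HMM listed in the index terms. The sketch below is illustrative only: the function name, the concentration parameter value, and the seed are assumptions of this example, not details from the paper.

```python
import random

def crp_partition(n, alpha, seed=0):
    """Sample a partition of n items from a Chinese restaurant process.

    Illustrates the nonparametric idea: the number of clusters (e.g.,
    latent states of a learned model) is not fixed in advance but
    grows with the amount of data, roughly like alpha * log(n).
    `alpha` is the concentration parameter (chosen here for
    illustration, not taken from the paper).
    """
    rng = random.Random(seed)
    counts = []  # counts[k] = number of items assigned to "table" k
    for i in range(n):
        # Item i joins existing table k with probability
        # counts[k] / (i + alpha), or opens a new table with
        # probability alpha / (i + alpha).
        r = rng.uniform(0, i + alpha)
        acc = 0.0
        for k, c in enumerate(counts):
            acc += c
            if r < acc:
                counts[k] += 1
                break
        else:
            counts.append(1)
    return counts

# More data tends to yield more occupied tables, but sublinearly:
# the representation grows only as the data demand it.
tables_small = len(crp_partition(100, alpha=1.0))
tables_large = len(crp_partition(10000, alpha=1.0))
```

In an HDP-HMM, this same mechanism governs how many hidden states the model uses, which is why the learned representation can stay small on simple domains and expand on complex ones.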
Index Terms:
History, Hidden Markov models, Bayes methods, Computational modeling, Learning (artificial intelligence), Markov processes, Knowledge representation, HDP-HMM, Reinforcement Learning, POMDP
Citation:
Finale Doshi-Velez, David Pfau, Frank Wood, Nicholas Roy, "Bayesian Nonparametric Methods for Partially-Observable Reinforcement Learning," IEEE Transactions on Pattern Analysis and Machine Intelligence, 20 Nov. 2013. IEEE Computer Society Digital Library. IEEE Computer Society, <http://doi.ieeecomputersociety.org/10.1109/TPAMI.2013.191>