Displaying 1-6 out of 6 total
Bayesian Nonparametric Methods for Partially-Observable Reinforcement Learning
Found in: IEEE Transactions on Pattern Analysis and Machine Intelligence
By Finale Doshi-Velez, David Pfau, Frank Wood, Nicholas Roy
Issue Date: November 2013
pp. 1
Making intelligent decisions from incomplete information is critical in many applications: for example, robots must choose actions based on imperfect sensors, and speech-based interfaces must infer a user's needs from noisy microphone inputs. What makes th...
 
Grounding spatial language for video search
Found in: International Conference on Multimodal Interfaces and the Workshop on Machine Learning for Multimodal Interaction (ICMI-MLMI '10)
By Deb Roy, George Shaw, Nicholas Roy, Stefanie Tellex, Thomas Kollar
Issue Date: November 2010
pp. 1-8
The ability to find a video clip that matches a natural language description of an event would enable intuitive search of large databases of surveillance video. We present a mechanism for connecting a spatial language query to a video clip corresponding to...
     
Toward understanding natural language directions
Found in: Proceedings of the 5th ACM/IEEE international conference on Human-robot interaction (HRI '10)
By Deb Roy, Nicholas Roy, Stefanie Tellex, Thomas Kollar
Issue Date: March 2010
pp. 259-266
Speaking using unconstrained natural language is an intuitive and flexible way for humans to interact with robots. Understanding this kind of linguistic input is challenging because diverse words and phrases must be mapped into structures that the robot ca...
     
Reinforcement learning with limited reinforcement: using Bayes risk for active learning in POMDPs
Found in: Proceedings of the 25th international conference on Machine learning (ICML '08)
By Finale Doshi, Joelle Pineau, Nicholas Roy
Issue Date: July 2008
pp. 256-263
Partially Observable Markov Decision Processes (POMDPs) have succeeded in planning domains that require balancing actions that increase an agent's knowledge and actions that increase an agent's reward. Unfortunately, most POMDPs are defined with a large nu...
     
Optimal sensor placement for agent localization
Found in: ACM Transactions on Sensor Networks (TOSN)
By Damien B. Jourdan, Nicholas Roy
Issue Date: May 2008
pp. 1-40
In this article we consider deploying a sensor network to help an agent navigate in an area. In particular the agent uses range measurements to the sensors to localize itself. We wish to place the sensors in order to provide optimal localization accuracy t...
     
Efficient model learning for dialog management
Found in: Proceedings of the ACM/IEEE international conference on Human-robot interaction (HRI '07)
By Finale Doshi, Nicholas Roy
Issue Date: March 2007
pp. 65-72
Intelligent planning algorithms such as the Partially Observable Markov Decision Process (POMDP) have succeeded in dialog management applications [10, 11, 12] because they are robust to the inherent uncertainty of human interaction. Like all dialog plannin...