Redondo Beach, California
Nov. 12–14, 2000
P. Auer, Inst. for Theor. Comput. Sci., Graz Univ. of Technol., Austria
We show how a standard tool from statistics, namely confidence bounds, can be used to elegantly deal with situations which exhibit an exploitation/exploration trade-off. Our technique for designing and analyzing algorithms for such situations is very general and can be applied whenever an algorithm has to make exploitation-versus-exploration decisions based on uncertain information provided by a random process. We consider two models with such an exploitation/exploration trade-off. For the adversarial bandit problem our new algorithm suffers only Õ(T^(1/2)) regret over T trials, which improves significantly over the previously best Õ(T^(2/3)) regret. We also extend our results for the adversarial bandit problem to shifting bandits. The second model we consider is associative reinforcement learning with linear value functions. For this model our technique improves the regret from Õ(T^(3/4)) to Õ(T^(1/2)).
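To make the confidence-bound idea concrete, here is a minimal sketch of a classic upper-confidence-bound strategy (UCB1-style) for the simpler stochastic multi-armed bandit setting. This is an illustration of the general principle the abstract describes, not the paper's own algorithm for the adversarial or associative settings; the function name, the Bernoulli-arm setup, and the bonus term sqrt(2 ln t / n) are standard textbook choices, assumed here for illustration.

```python
import math
import random

def ucb1(reward_fns, horizon, seed=0):
    """Play each arm once, then pull the arm maximizing
    empirical mean + sqrt(2 ln t / n): the upper confidence bound
    balances exploitation (the mean) against exploration (the bonus)."""
    rng = random.Random(seed)
    k = len(reward_fns)
    counts = [0] * k      # pulls per arm
    sums = [0.0] * k      # cumulative reward per arm
    total = 0.0
    for t in range(1, horizon + 1):
        if t <= k:
            arm = t - 1   # initialization round: one pull per arm
        else:
            arm = max(range(k),
                      key=lambda i: sums[i] / counts[i]
                      + math.sqrt(2.0 * math.log(t) / counts[i]))
        r = reward_fns[arm](rng)
        counts[arm] += 1
        sums[arm] += r
        total += r
    return total, counts

# Two Bernoulli arms with means 0.9 and 0.1; UCB1 should
# concentrate its pulls on the better arm as t grows.
total, counts = ucb1(
    [lambda rng: 1.0 if rng.random() < 0.9 else 0.0,
     lambda rng: 1.0 if rng.random() < 0.1 else 0.0],
    horizon=2000)
```

The exploration bonus shrinks as an arm's pull count grows, so under-sampled arms keep getting revisited until their confidence intervals rule them out; this is the mechanism that yields sublinear regret in the stochastic case.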
learning (artificial intelligence); statistical analysis; uncertainty handling; random processes; upper confidence bounds; online learning; statistics; exploitation decision; exploration decision; uncertain information; random process; adversarial bandit problem; associative reinforcement learning; linear value functions
P. Auer, "Using upper confidence bounds for online learning," Proc. 41st Annual Symposium on Foundations of Computer Science (FOCS 2000), p. 270, 2000, doi:10.1109/SFCS.2000.892116