Issue No. 04 - April 2006 (vol. 28)
pp. 594-611
Li Fei-Fei , IEEE
Rob Fergus , IEEE
Pietro Perona , IEEE
ABSTRACT
Learning visual models of object categories notoriously requires hundreds or thousands of training examples. We show that it is possible to learn much about a category from just one, or a handful, of images. The key insight is that, rather than learning from scratch, one can take advantage of knowledge coming from previously learned categories, no matter how different these categories might be. We explore a Bayesian implementation of this idea. Object categories are represented by probabilistic models. Prior knowledge is represented as a probability density function on the parameters of these models. The posterior model for an object category is obtained by updating the prior in the light of one or more observations. We test a simple implementation of our algorithm on a database of 101 diverse object categories. We compare category models learned by an implementation of our Bayesian approach to models learned by Maximum Likelihood (ML) and Maximum A Posteriori (MAP) methods. We find that on a database of more than 100 categories, the Bayesian approach produces informative models when the number of training examples is too small for other methods to operate successfully.
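The abstract's core idea, updating a prior learned from other categories with one new observation, can be illustrated with a minimal conjugate-Gaussian sketch. This is not the paper's constellation model; the prior parameters, noise level, and single observation below are all assumed values chosen for illustration.

```python
# One-dimensional toy: prior knowledge about a feature mean, transferred
# from previously learned categories, is the prior  mu ~ N(mu0, tau0^2).
# The new category yields a single observation  x | mu ~ N(mu, sigma^2).
# ML from one example returns x itself; the Bayesian posterior is a
# precision-weighted average of the prior mean and the observation.

mu0, tau0 = 0.0, 1.0   # prior mean and spread (assumed, "from other categories")
sigma = 2.0            # observation noise (assumed)
x = 3.0                # the single training example

# Maximum-Likelihood estimate from one example: the example itself
mu_ml = x

# Conjugate Gaussian posterior update
prec_post = 1.0 / tau0**2 + 1.0 / sigma**2
mu_post = (mu0 / tau0**2 + x / sigma**2) / prec_post
var_post = 1.0 / prec_post

print(mu_ml)    # 3.0
print(mu_post)  # 0.6  -- shrunk toward the prior mean, since sigma > tau0
print(var_post) # 0.8
```

With many examples the data term dominates and the posterior approaches the ML estimate; with one noisy example, the prior keeps the model informative, which is the behavior the abstract reports at small training-set sizes.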
INDEX TERMS
Recognition, object categories, learning, few images, unsupervised, variational inference, priors.
CITATION
Li Fei-Fei, Rob Fergus, Pietro Perona, "One-Shot Learning of Object Categories", IEEE Transactions on Pattern Analysis & Machine Intelligence, vol.28, no. 4, pp. 594-611, April 2006, doi:10.1109/TPAMI.2006.79
