Learning with Hierarchical-Deep Models
Aug. 2013 (vol. 35 no. 8)
pp. 1958-1971
R. Salakhutdinov, Dept. of Stat. & Comput. Sci., Univ. of Toronto, Toronto, ON, Canada
J. B. Tenenbaum, Dept. of Brain & Cognitive Sci., Massachusetts Inst. of Technol., Cambridge, MA, USA
A. Torralba, Comput. Sci. & Artificial Intell. Lab., Massachusetts Inst. of Technol., Cambridge, MA, USA
We introduce HD (or “Hierarchical-Deep”) models, a new compositional learning architecture that integrates deep learning models with structured hierarchical Bayesian (HB) models. Specifically, we show how we can learn a hierarchical Dirichlet process (HDP) prior over the activities of the top-level features in a deep Boltzmann machine (DBM). This compound HDP-DBM model learns to learn novel concepts from very few training examples by learning low-level generic features, high-level features that capture correlations among low-level features, and a category hierarchy for sharing priors over the high-level features that are typical of different kinds of concepts. We present efficient learning and inference algorithms for the HDP-DBM model and show that it is able to learn new concepts from very few examples on CIFAR-100 object recognition, handwritten character recognition, and human motion capture datasets.
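The following is a minimal conceptual sketch (not the authors' implementation) of the idea summarized above: class-level activities of top-level DBM features are grouped into super-categories by a nonparametric (CRP-style) prior, and a novel class borrows its prior from the super-category it most resembles. All names and values here (n_features, crp_alpha, the distance-based likelihood proxy, etc.) are illustrative assumptions; the actual HDP-DBM performs full posterior inference over the DBM and the HDP tree.

```python
import numpy as np

rng = np.random.default_rng(0)
n_features = 50   # number of top-level DBM hidden units (assumed for illustration)
crp_alpha = 1.0   # CRP concentration controlling how readily new super-categories form

# Stand-in for mean top-level DBM activations of a few known classes.
known_activations = rng.random((10, n_features))

def crp_cluster(points, alpha, n_sweeps=20):
    """Toy Chinese-restaurant-process clustering of class-level feature vectors
    into super-categories; a distance-based score stands in for the model's
    actual likelihood under a shared prior."""
    z = np.zeros(len(points), dtype=int)  # super-category assignment per class
    for _ in range(n_sweeps):
        for i, x in enumerate(points):
            others = np.delete(np.arange(len(points)), i)
            labels = z[others]
            scores, choices = [], []
            for k in np.unique(labels):
                members = points[others[labels == k]]
                # existing super-category: weight by size and closeness to its mean
                scores.append(len(members) * np.exp(-np.linalg.norm(x - members.mean(0))))
                choices.append(k)
            # option to open a brand-new super-category
            scores.append(alpha * np.exp(-1.0))
            choices.append(z.max() + 1)
            p = np.array(scores) / np.sum(scores)
            z[i] = choices[rng.choice(len(choices), p=p)]
    return z

super_cats = crp_cluster(known_activations, crp_alpha)

# A novel class seen only once borrows a prior mean from the closest super-category.
novel = rng.random(n_features)
best = min(np.unique(super_cats),
           key=lambda k: np.linalg.norm(novel - known_activations[super_cats == k].mean(0)))
prior_mean = known_activations[super_cats == best].mean(0)
print("novel class assigned to super-category", best)
```

In this toy setting the shared super-category mean plays the role of the learned prior over high-level features, which is what allows the novel class to be characterized from a single example.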
Index Terms:
Approximation methods, Machine learning, Stochastic processes, Computational modeling, Vectors, Bayesian methods, Training, one-shot learning, deep networks, deep Boltzmann machines, hierarchical Bayesian models
Citation:
R. Salakhutdinov, J. B. Tenenbaum, A. Torralba, "Learning with Hierarchical-Deep Models," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 8, pp. 1958-1971, Aug. 2013, doi:10.1109/TPAMI.2012.269