
Bibliographic References  
Jakob Verbeek, "Learning Nonlinear Image Manifolds by Global Alignment of Local Linear Models," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 28, no. 8, pp. 1236-1250, August 2006.
[1] A. Leonardis and H. Bischof, “Kernel and Subspace Methods for Computer Vision,” Pattern Recognition, vol. 36, no. 9, pp. 1925-1927, 2003.
[2] S.T. Roweis, L.K. Saul, and G.E. Hinton, “Global Coordination of Local Linear Models,” Advances in Neural Information Processing Systems, vol. 14, pp. 889-896, 2002.
[3] L. Sirovich and M. Kirby, “Low-Dimensional Procedure for the Characterization of Human Faces,” J. Optical Soc. Am. A, vol. 4, no. 3, pp. 519-524, 1987.
[4] I.T. Jolliffe, Principal Component Analysis. New York: Springer-Verlag, 1986.
[5] M. Turk and A. Pentland, “Eigenfaces for Recognition,” J. Cognitive Neuroscience, vol. 3, no. 1, pp. 71-86, 1991.
[6] H. Murase and S.K. Nayar, “Visual Learning and Recognition of 3D Objects from Appearance,” Int'l J. Computer Vision, vol. 14, pp. 5-24, 1995.
[7] E. Oja, “Data Compression, Feature Extraction, and Autoassociation in Feedforward Neural Networks,” Proc. Int'l Conf. Artificial Neural Networks, pp. 737-745, 1991.
[8] T. Hastie and W. Stuetzle, “Principal Curves,” J. Am. Statistical Assoc., vol. 84, no. 406, pp. 502-516, 1989.
[9] B. Kégl, A. Krzyzak, T. Linder, and K. Zeger, “Learning and Design of Principal Curves,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 22, no. 3, pp. 281-297, Mar. 2000.
[10] K. Chang and J. Ghosh, “A Unified Model for Probabilistic Principal Surfaces,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 23, no. 1, pp. 22-41, Jan. 2001.
[11] T. Kohonen, Self-Organizing Maps. New York: Springer-Verlag, 1995.
[12] J.J. Verbeek, N. Vlassis, and B.J.A. Kröse, “Self-Organizing Mixture Models,” Neurocomputing, vol. 63, pp. 99-123, 2005.
[13] C.M. Bishop, M. Svensén, and C.K.I. Williams, “GTM: The Generative Topographic Mapping,” Neural Computation, vol. 10, pp. 215-234, 1998.
[14] J.J. Verbeek, “Mixture Models for Clustering and Dimension Reduction,” PhD dissertation, Univ. of Amsterdam, 2004.
[15] I.K. Fodor, “A Survey of Dimension Reduction Techniques,” Technical Report UCRL-ID-148494, Lawrence Livermore Nat'l Laboratory, Center for Applied Scientific Computing, 2002.
[16] M.Á. Carreira-Perpiñán, “A Review of Dimension Reduction Techniques,” Technical Report CS-96-09, Dept. of Computer Science, Univ. of Sheffield, 1997.
[17] J.B. Tenenbaum, V. de Silva, and J.C. Langford, “A Global Geometric Framework for Nonlinear Dimensionality Reduction,” Science, vol. 290, no. 5500, pp. 2319-2323, 2000.
[18] S.T. Roweis and L.K. Saul, “Nonlinear Dimensionality Reduction by Locally Linear Embedding,” Science, vol. 290, no. 5500, pp. 2323-2326, 2000.
[19] B. Schölkopf, A.J. Smola, and K.-R. Müller, “Nonlinear Component Analysis as a Kernel Eigenvalue Problem,” Neural Computation, vol. 10, pp. 1299-1319, 1998.
[20] M. Brand, “Charting a Manifold,” Advances in Neural Information Processing Systems, vol. 15, pp. 961-968, 2003.
[21] X. He and P. Niyogi, “Locality Preserving Projections,” Advances in Neural Information Processing Systems, vol. 16, pp. 153-160, 2004.
[22] X. He, S. Yan, Y. Hu, P. Niyogi, and H.J. Zhang, “Face Recognition Using Laplacianfaces,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 27, no. 3, pp. 328-340, Mar. 2005.
[23] M. Belkin and P. Niyogi, “Laplacian Eigenmaps and Spectral Techniques for Embedding and Clustering,” Advances in Neural Information Processing Systems, vol. 14, pp. 585-591, 2002.
[24] K.Q. Weinberger and L.K. Saul, “Unsupervised Learning of Image Manifolds by Semidefinite Programming,” Proc. Int'l Conf. Computer Vision and Pattern Recognition, pp. 988-995, 2004.
[25] Y. Bengio, J.-F. Paiement, P. Vincent, O. Delalleau, N. Le Roux, and M. Ouimet, “Out-of-Sample Extensions for LLE, Isomap, MDS, Eigenmaps, and Spectral Clustering,” Advances in Neural Information Processing Systems, vol. 16, pp. 177-184, 2004.
[26] N. Kambhatla and T.K. Leen, “Fast Nonlinear Dimension Reduction,” Advances in Neural Information Processing Systems, vol. 6, pp. 152-159, 1994.
[27] C. Bregler and S.M. Omohundro, “Nonlinear Image Interpolation Using Manifold Learning,” Advances in Neural Information Processing Systems, vol. 7, pp. 973-980, 1995.
[28] M.E. Tipping and C.M. Bishop, “Mixtures of Probabilistic Principal Component Analysers,” Neural Computation, vol. 11, no. 2, pp. 443-482, 1999.
[29] G.E. Hinton, P. Dayan, and M. Revow, “Modeling the Manifolds of Images of Handwritten Digits,” IEEE Trans. Neural Networks, vol. 8, no. 1, pp. 65-74, 1997.
[30] R. Bellman, Adaptive Control Processes: A Guided Tour. Princeton Univ. Press, 1961.
[31] J.H. Ham, D.D. Lee, and L.K. Saul, “Learning High-Dimensional Correspondences from Low-Dimensional Manifolds,” Proc. Workshop on the Continuum from Labeled to Unlabeled Data in Machine Learning and Data Mining, 2003.
[32] X. Meng and D. van Dyk, “The EM Algorithm—An Old Folk Song Sung to a Fast New Tune,” J. Royal Statistical Soc., Series B (Methodological), vol. 59, no. 1, pp. 511-567, 1997.
[33] D. de Ridder and V. Franc, “Robust Subspace Mixture Models Using t-Distributions,” Proc. British Machine Vision Conf., pp. 319-328, 2003.
[34] A.P. Dempster, N.M. Laird, and D.B. Rubin, “Maximum Likelihood from Incomplete Data via the EM Algorithm,” J. Royal Statistical Soc., Series B (Methodological), vol. 39, no. 1, pp. 1-38, 1977.
[35] Z. Ghahramani and G.E. Hinton, “The EM Algorithm for Mixtures of Factor Analyzers,” Technical Report CRG-TR-96-1, Univ. of Toronto, 1996.
[36] Y.W. Teh and S.T. Roweis, “Automatic Alignment of Local Representations,” Advances in Neural Information Processing Systems, vol. 15, pp. 841-848, 2003.
[37] J.J. Verbeek, S.T. Roweis, and N. Vlassis, “Nonlinear CCA and PCA by Alignment of Local Models,” Advances in Neural Information Processing Systems, vol. 16, pp. 297-304, 2004.
[38] J. Wieghardt, “Learning the Topology of Views: From Images to Objects,” PhD dissertation, Ruhr-Univ. Bochum, Bochum, Germany, 2001.
[39] L.K. Saul and S.T. Roweis, “Think Globally, Fit Locally: Unsupervised Learning of Low-Dimensional Manifolds,” J. Machine Learning Research, vol. 4, pp. 119-155, 2003.
[40] S. Chretien and A.O. Hero, “Kullback Proximal Algorithms for Maximum Likelihood Estimation,” IEEE Trans. Information Theory, vol. 46, no. 5, pp. 1800-1810, 2000.
[41] R. Jacobs, M.I. Jordan, S.J. Nowlan, and G.E. Hinton, “Adaptive Mixtures of Local Experts,” Neural Computation, vol. 3, pp. 79-87, 1991.
[42] J.J. Verbeek, N. Vlassis, and B.J.A. Kröse, “Coordinating Principal Component Analyzers,” Proc. Int'l Conf. Artificial Neural Networks, vol. 12, pp. 914-919, 2002.
[43] B. Kégl, “Intrinsic Dimension Estimation Using Packing Numbers,” Advances in Neural Information Processing Systems, vol. 15, pp. 681-688, 2003.
[44] J.A. Costa and A.O. Hero, “Geodesic Entropic Graphs for Dimension and Entropy Estimation in Manifold Learning,” IEEE Trans. Signal Processing, vol. 52, no. 8, pp. 2210-2221, 2004.
[45] E. Levina and P.J. Bickel, “Maximum Likelihood Estimation of Intrinsic Dimension,” Advances in Neural Information Processing Systems, vol. 17, pp. 777-784, 2005.
[46] J.J. Oliver, R.A. Baxter, and C.S. Wallace, “Unsupervised Learning Using MML,” Proc. Int'l Conf. Machine Learning, vol. 13, pp. 364-374, 1996.
[47] M.A.T. Figueiredo and A.K. Jain, “Unsupervised Learning of Finite Mixture Models,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 24, no. 3, pp. 381-396, Mar. 2002.
[48] Z. Ghahramani and M.J. Beal, “Variational Inference for Bayesian Mixtures of Factor Analysers,” Advances in Neural Information Processing Systems, vol. 12, pp. 449-455, 2000.
[49] J.H. Ham, D.D. Lee, and L.K. Saul, “Semisupervised Alignment of Manifolds,” Proc. Ann. Conf. Uncertainty in Artificial Intelligence, vol. 10, pp. 120-127, 2005.
[50] H. Ritter, “Parametrized Self-Organizing Maps,” Proc. Int'l Conf. Artificial Neural Networks, vol. 3, pp. 568-577, 1993.
[51] Z. Ghahramani and M.I. Jordan, “Supervised Learning from Incomplete Data via an EM Approach,” Advances in Neural Information Processing Systems, vol. 6, pp. 120-127, 1994.
[52] H. Karcher, “Riemannian Center of Mass and Mollifier Smoothing,” Comm. Pure and Applied Math., vol. 30, pp. 509-541, 1977.
[53] A.R. Webb, Statistical Pattern Recognition. New York: Wiley, 2002.
[54] G. Peters, B. Zitova, and C. von der Malsburg, “How to Measure the Pose Robustness of Object Views,” Image and Vision Computing, vol. 20, no. 4, pp. 249-256, 2002.