Issue No. 9, September 2011 (vol. 33)
pp. 1776-1792
Ruiping Wang , Chinese Academy of Sciences, Beijing
Shiguang Shan , Chinese Academy of Sciences, Beijing
Xilin Chen , Chinese Academy of Sciences, Beijing
Jie Chen , University of Oulu, Oulu
Wen Gao , Peking University, Beijing
ABSTRACT
Over the past few decades, dimensionality reduction has been widely exploited in computer vision and pattern analysis. This paper proposes a simple but effective nonlinear dimensionality reduction algorithm, named Maximal Linear Embedding (MLE). MLE learns a parametric mapping that recovers a single global low-dimensional coordinate space and yields an isometric embedding of the manifold. Inspired by geometric intuition, we introduce a principled definition of a locally linear patch, the Maximal Linear Patch (MLP), which maximizes the size of the local neighborhood over which linearity holds. The input data are first decomposed into a collection of local linear models, each described by an MLP. These local models are then aligned into a global coordinate space by applying MDS to a set of randomly selected landmarks. The proposed alignment method, called Landmarks-based Global Alignment (LGA), efficiently produces a closed-form solution with no risk of local optima: it involves only a few small-scale eigenvalue problems, whereas most previous alignment techniques rely on time-consuming iterative optimization. Compared with traditional methods such as ISOMAP and LLE, MLE yields an explicit model of the intrinsic variation modes of the observed data. Extensive experiments on both synthetic and real data demonstrate the effectiveness and efficiency of the proposed algorithm.
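The LGA step described above builds on classical MDS applied to landmark points. As a minimal illustration only (not the paper's implementation, and the landmark selection and patch alignment are omitted), classical MDS recovers coordinates from a pairwise distance matrix via double centering and an eigendecomposition:

```python
import numpy as np

def classical_mds(D, d):
    """Embed n points in d dimensions from an n x n pairwise distance matrix D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    B = -0.5 * J @ (D ** 2) @ J              # double-centered Gram matrix
    eigvals, eigvecs = np.linalg.eigh(B)     # eigenvalues in ascending order
    idx = np.argsort(eigvals)[::-1][:d]      # keep the top-d eigenpairs
    scale = np.sqrt(np.maximum(eigvals[idx], 0))
    return eigvecs[:, idx] * scale           # n x d coordinate matrix

# Toy example: three points on a line; MDS should recover their spacing.
X = np.array([[0.0], [1.0], [3.0]])
D = np.abs(X - X.T)                          # exact pairwise distances
Y = classical_mds(D, 1)
```

For Euclidean distance matrices, the embedding reproduces the pairwise distances exactly up to rotation and reflection, which is the property LGA exploits when stitching local models into one global coordinate frame.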
INDEX TERMS
Dimensionality reduction, manifold learning, maximal linear patch, landmarks-based global alignment.
CITATION
Ruiping Wang, Shiguang Shan, Xilin Chen, Jie Chen, Wen Gao, "Maximal Linear Embedding for Dimensionality Reduction", IEEE Transactions on Pattern Analysis & Machine Intelligence, vol.33, no. 9, pp. 1776-1792, September 2011, doi:10.1109/TPAMI.2011.39