Issue No. 09 - September 2009 (vol. 21)
pp: 1299-1313
Tianhao Zhang , Shanghai Jiao Tong University, Shanghai
Dacheng Tao , The Hong Kong Polytechnic University, Kowloon
Xuelong Li , Birkbeck College, University of London, London
Jie Yang , Shanghai Jiao Tong University, Shanghai
Spectral analysis-based dimensionality reduction algorithms are important and have been widely applied in data mining and computer vision. To date, many such algorithms have been developed, e.g., principal component analysis, locally linear embedding, Laplacian eigenmaps, and local tangent space alignment. All of these algorithms were designed intuitively and pragmatically, i.e., on the basis of the experience and knowledge of experts, each for a specific purpose. It is therefore informative to provide a systematic framework for understanding their common properties and intrinsic differences. In this paper, we propose such a framework, named "patch alignment," which consists of two stages: part optimization and whole alignment. The framework reveals that 1) algorithms are intrinsically different in the part optimization stage and 2) all algorithms share an almost identical whole alignment stage. As an application of this framework, we develop a new dimensionality reduction algorithm, termed Discriminative Locality Alignment (DLA), by imposing discriminative information in the part optimization stage. DLA can 1) handle the nonlinearity of the distribution of measurements; 2) preserve discriminative ability; and 3) avoid the small-sample-size problem. Thorough empirical studies demonstrate the effectiveness of DLA compared with representative dimensionality reduction algorithms.
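The whole alignment stage the abstract describes can be illustrated with a minimal sketch: each patch contributes a small local matrix (produced by part optimization), and these are accumulated into one global alignment matrix via selection matrices. The function name `whole_alignment` and the toy patch matrices below are illustrative assumptions, not the paper's code; only the accumulation pattern itself is what the framework shares across algorithms.

```python
import numpy as np

def whole_alignment(part_matrices, patch_indices, n):
    """Accumulate per-patch matrices L_i into a global n x n alignment matrix.

    part_matrices : list of (k_i, k_i) arrays from the part optimization stage
    patch_indices : list of integer index arrays, the samples forming each patch
    n             : total number of samples
    """
    L = np.zeros((n, n))
    for L_i, idx in zip(part_matrices, patch_indices):
        # Equivalent to L += S_i @ L_i @ S_i.T, where S_i is the 0/1
        # selection matrix picking out this patch's samples.
        L[np.ix_(idx, idx)] += L_i
    return L

# Toy usage: two overlapping patches over four samples.
parts = [np.eye(2), np.ones((2, 2))]
patches = [np.array([0, 1]), np.array([1, 2])]
L = whole_alignment(parts, patches, 4)
```

Because the patches overlap at sample 1, both local matrices contribute to entry (1, 1) of the global matrix; the low-dimensional embedding is then obtained by spectral analysis of `L`, which is where the individual algorithms coincide.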
Dimensionality reduction, spectral analysis, patch alignment, discriminative locality alignment.
Tianhao Zhang, Dacheng Tao, Xuelong Li, Jie Yang, "Patch Alignment for Dimensionality Reduction", IEEE Transactions on Knowledge & Data Engineering, vol.21, no. 9, pp. 1299-1313, September 2009, doi:10.1109/TKDE.2008.212