IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 30, no. 8, August 2008, pp. 1490-1495
Yi Guo, University of New England, Armidale
Junbin Gao, Charles Sturt University, Bathurst
Paul W. Kwan, University of New England, Armidale
In most existing dimensionality reduction algorithms, the main objective is to preserve the relational structure among objects of the input space in a low-dimensional embedding space. This is achieved by minimizing the inconsistency between two similarity/dissimilarity measures, one for the input data and the other for the embedded data, via a separate matching objective function. Based on this idea, a new dimensionality reduction method called Twin Kernel Embedding (TKE) is proposed. TKE addresses the problem of visualizing non-vectorial data, which is difficult for conventional methods in practice because such data lacks an efficient vectorial representation. TKE solves this problem by minimizing the inconsistency between the similarity measures in the two spaces, each captured by its own kernel Gram matrix. In the implementation, the resulting nonlinear objective function is minimized by gradient descent, which converges to a local minimum. The results obtained include both the optimal similarity-preserving embedding and appropriate values for the hyperparameters of the kernel. Experimental evaluation on real non-vectorial datasets confirmed the effectiveness of TKE. TKE can be applied to other types of data beyond those mentioned in this paper whenever suitable measures of similarity/dissimilarity can be defined on the input data.
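The core idea described above, matching an input-space Gram matrix against the Gram matrix of a learned embedding by gradient descent, can be illustrated with a toy sketch. This is not the authors' implementation: it fixes the embedding kernel's width `gamma` instead of learning hyperparameters, and minimizes only the Frobenius discrepancy between the two Gram matrices. The function names `rbf_gram` and `tke_embed` are invented for this example.

```python
import numpy as np

def rbf_gram(X, gamma):
    """RBF Gram matrix K_ij = exp(-gamma * ||x_i - x_j||^2) on points X (n x d)."""
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * np.maximum(d2, 0.0))

def tke_embed(K_in, dim=2, gamma=1.0, lr=0.002, n_iter=500, seed=0):
    """Toy TKE-style embedding: gradient descent on L(X) = ||K_in - K_emb(X)||_F^2.

    K_in is the (symmetric) Gram matrix computed on the non-vectorial input data;
    X collects the low-dimensional embedded points. Returns the best X visited.
    """
    rng = np.random.default_rng(seed)
    n = K_in.shape[0]
    X = 0.1 * rng.standard_normal((n, dim))
    best, best_loss = X.copy(), np.inf
    for _ in range(n_iter):
        K = rbf_gram(X, gamma)
        loss = np.sum((K_in - K) ** 2)
        if loss < best_loss:
            best, best_loss = X.copy(), loss
        # Chain rule: dL/dK = 2(K - K_in), dK_ij/dd_ij = -gamma * K_ij,
        # dd_ij/dx_i = 2(x_i - x_j); symmetry of K doubles each pair's term.
        W = -gamma * 2.0 * (K - K_in) * K
        grad = 4.0 * (W.sum(axis=1, keepdims=True) * X - W @ X)
        X = X - lr * grad
    return best
```

In practice any kernel defined directly on the input objects (e.g. a string kernel on text) can supply `K_in`, which is what lets the method bypass a vectorial representation of the input; the paper's actual optimizer and hyperparameter learning differ from this fixed-`gamma` sketch.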
Machine learning, Clustering, Visualization
Yi Guo, Junbin Gao, Paul W. Kwan, "Twin Kernel Embedding", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 30, no. 8, pp. 1490-1495, August 2008, doi:10.1109/TPAMI.2008.74