On Utilizing Search Methods to Select Subspace Dimensions for Kernel-Based Nonlinear Subspace Classifiers
January 2005 (vol. 27 no. 1)
pp. 136-141
In Kernel-based Nonlinear Subspace (KNS) methods, the subspace dimensions have a strong influence on the performance of the subspace classifier. To obtain high classification accuracy, a large dimension is generally required. However, if the chosen subspace dimension is too large, performance degrades because the resultant subspaces overlap; if it is too small, the classification error increases because the resulting approximation is poor. The most common approach is ad hoc: the dimensions are selected based on the so-called cumulative proportion [13] computed from the kernel matrix for each class. In this paper, we propose a new method of systematically and efficiently selecting optimal or near-optimal subspace dimensions for KNS classifiers using a search strategy and a heuristic function termed the Overlapping criterion. The rationale for this function is motivated in the body of the paper. The task of selecting optimal subspace dimensions is reduced to finding the best ones from a given problem-domain solution space using this criterion as a heuristic function. Thus, the search space can be pruned so that the best solution is found very efficiently. Our experimental results demonstrate that the proposed mechanism selects the dimensions efficiently without sacrificing the classification accuracy.
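The cumulative-proportion baseline mentioned above [13] picks, for each class, the smallest number of leading kernel principal components whose eigenvalues account for a chosen fraction of the total spectral mass. The sketch below illustrates that baseline only (not the paper's Overlapping criterion or its search strategy); the function names, the RBF kernel, and the gamma/threshold values are illustrative assumptions, not part of the paper.

import numpy as np

def rbf_kernel_matrix(X, gamma=1.0):
    # Gram matrix K[i, j] = exp(-gamma * ||x_i - x_j||^2)  (assumed kernel choice)
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * d2)

def select_dim_by_cumulative_proportion(K, threshold=0.95):
    # Eigendecompose the centered kernel matrix and return the smallest
    # dimension d whose leading eigenvalues reach `threshold` of the total,
    # i.e., the cumulative-proportion rule attributed to [13].
    n = K.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J                              # center in feature space
    eigvals = np.linalg.eigvalsh(Kc)[::-1]      # descending order
    eigvals = np.clip(eigvals, 0.0, None)       # guard tiny negative values
    cum = np.cumsum(eigvals) / eigvals.sum()
    return int(np.searchsorted(cum, threshold) + 1)

# Example: per-class dimension selection on synthetic data
rng = np.random.default_rng(0)
X_class = rng.normal(size=(50, 4))
K = rbf_kernel_matrix(X_class, gamma=0.5)
print("selected subspace dimension:", select_dim_by_cumulative_proportion(K, 0.9))

The paper's contribution is to replace this fixed-threshold rule with a state-space search guided by the Overlapping criterion, which is not reproduced here.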

[1] D. Achlioptas and F. McSherry, “Fast Computation of Low-Rank Matrix Approximations,” Proc. 33rd Ann. ACM Symp. Theory of Computing (STOC '01), pp. 611-618, 2001.
[2] D. Achlioptas, F. McSherry, and B. Schölkopf, “Sampling Techniques for Kernel Methods,” Advances in Neural Information Processing Systems 14, pp. 335-342, 2002.
[3] C.L. Blake and C.J. Merz, “UCI Repository of Machine Learning Databases,” Dept. of Information and Computer Science, Univ. of California, Irvine, http://www.ics.uci.edu/~mlearn/MLRepository.html, 1998.
[4] K. Fukunaga, Introduction to Statistical Pattern Recognition, second ed. San Diego, Calif.: Academic Press, 1990.
[5] L.N. Kanal, “Problem-Solving Models and Search Strategies for Pattern Recognition,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 1, no. 2, pp. 193-201, 1979.
[6] S.-W. Kim and B.J. Oommen, “On Using Prototype Reduction Schemes to Optimize Kernel-Based Nonlinear Subspace Methods,” Pattern Recognition, vol. 37, no. 2, pp. 227-239, 2004.
[7] S.-W. Kim and B.J. Oommen, “On Using Prototype Reduction Schemes and Classifier Fusion Strategies to Optimize Kernel-Based Nonlinear Subspace Methods,” submitted for publication.
[8] J. Laaksonen and E. Oja, “Subspace Dimension Selection and Averaged Learning Subspace Method in Handwritten Digit Classification,” Proc. Int'l Conf. Artificial Neural Networks (ICANN '96), pp. 227-232, 1996.
[9] E. Maeda and H. Murase, “Multi-Category Classification by Kernel Based Nonlinear Subspace Method,” Proc. IEEE Int'l Conf. Acoustics, Speech, and Signal Processing (ICASSP '99), 1999.
[10] K. Maeda and S. Watanabe, “A Pattern Matching Method with Local Structure (in Japanese),” IEICE Trans. Information & Systems, vol. J68-D, no. 3, pp. 345-352, Mar. 1985.
[11] K.R. Müller, S. Mika, G. Rätsch, K. Tsuda, and B. Schölkopf, “An Introduction to Kernel-Based Learning Algorithms,” IEEE Trans. Neural Networks, vol. 12, no. 2, pp. 181-201, Mar. 2001.
[12] N.J. Nilsson, Problem-Solving Methods in Artificial Intelligence. McGraw-Hill, 1971.
[13] E. Oja, Subspace Methods of Pattern Recognition. Research Studies Press, 1983.
[14] B.J. Oommen and L. Rueda, “A Formal Analysis of Why Heuristic Functions Work,” The Artificial Intelligence J., to appear.
[15] E. Rich and K. Knight, Artificial Intelligence, second ed. McGraw-Hill, 1991.
[16] B. Schölkopf, S. Mika, C.J.C. Burges, P. Knirsch, K.R. Müller, G. Rätsch, K. Tsuda, and A.J. Smola, “Input Space Versus Feature Space in Kernel-Based Methods,” IEEE Trans. Neural Networks, vol. 10, no. 5, pp. 1000-1016, Sept. 1999.
[17] B. Schölkopf, A.J. Smola, and K.R. Müller, “Nonlinear Component Analysis as a Kernel Eigenvalue Problem,” Neural Computation, vol. 10, no. 5, pp. 1299-1319, 1998.
[18] A.J. Smola and B. Schölkopf, “Sparse Greedy Matrix Approximation for Machine Learning,” Proc. Int'l Conf. Machine Learning (ICML '00), pp. 911-918, 2000.
[19] M. Tipping, “Sparse Kernel Principal Component Analysis,” Advances in Neural Information Processing Systems 13, Cambridge, Mass.: MIT Press, 2001.
[20] K. Tsuda, “Subspace Method in the Hilbert Space (in Japanese),” IEICE Trans. Information & Systems, vol. J82-D-II, no. 4, pp. 592-599, Apr. 1999.
[21] C. Williams and M. Seeger, “Using the Nyström Method to Speed Up Kernel Machines,” Advances in Neural Information Processing Systems 13, 2001.

Index Terms:
Kernel principal component analysis (kPCA), kernel-based nonlinear subspace (KNS) classifier, subspace dimension selections, state-space search algorithms.
Citation:
Sang-Woon Kim, B. John Oommen, "On Utilizing Search Methods to Select Subspace Dimensions for Kernel-Based Nonlinear Subspace Classifiers," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 1, pp. 136-141, Jan. 2005, doi:10.1109/TPAMI.2005.15