On Using Prototype Reduction Schemes and Classifier Fusion Strategies to Optimize Kernel-Based Nonlinear Subspace Methods
March 2005 (vol. 27, no. 3)
pp. 455-460
In Kernel-based Nonlinear Subspace (KNS) methods, the length of the projections onto the principal component directions in the feature space is computed using a kernel matrix, K, whose dimension equals the number of sample data points. This is clearly problematic for large data sets. In this paper, we solve this problem by subdividing the data into smaller subsets and utilizing a Prototype Reduction Scheme (PRS) as a preprocessing module to yield more refined representative prototypes. Thereafter, a Classifier Fusion Strategy (CFS) is invoked as a postprocessing module to combine the individual KNS classification results into a consensus decision. Essentially, the PRS is used to gain a computational advantage, and the CFS, in turn, is used to compensate for the decreased efficiency caused by the division of the data set. Our experimental results demonstrate that the proposed mechanism significantly reduces both the prototype extraction time and the overall computation time without sacrificing classification accuracy. The results also demonstrate a significant computational advantage for large data sets within a parallel processing philosophy.
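
The pipeline the abstract describes (divide the data, reduce each subset to prototypes with a PRS, run a KNS classifier per subset, then fuse the decisions with a CFS) can be sketched in a few lines of Python. The sketch below is illustrative, not the authors' implementation: it uses an RBF kernel, random per-class subsampling as a stand-in for the paper's actual PRSs, simple majority voting as the CFS, and it omits the feature-space centering step of full kernel PCA. All names and parameters here (rbf_kernel, reduce_prototypes, fuse_majority, gamma, n_components, per_class) are this sketch's own, not the paper's.

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    """RBF (Gaussian) kernel matrix between the rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

class KernelSubspaceClassifier:
    """KNS-style classifier: one kernel principal subspace per class;
    a query is assigned to the class yielding the largest squared
    projection length. (Feature-space centering is omitted here.)"""

    def __init__(self, n_components=3, gamma=0.5):
        self.n_components, self.gamma = n_components, gamma

    def fit(self, X, y):
        self.models_ = {}
        for c in np.unique(y):
            Xc = X[y == c]                       # prototypes of class c
            K = rbf_kernel(Xc, Xc, self.gamma)   # kernel matrix: n_c x n_c
            lam, V = np.linalg.eigh(K)           # eigenvalues, ascending
            top = np.argsort(lam)[::-1][:self.n_components]
            lam, V = lam[top], V[:, top]
            alpha = V / np.sqrt(np.maximum(lam, 1e-12))  # unit-norm directions
            self.models_[c] = (Xc, alpha)
        return self

    def predict(self, X):
        classes = sorted(self.models_)
        scores = []
        for c in classes:
            Xc, alpha = self.models_[c]
            Kx = rbf_kernel(Xc, X, self.gamma)   # n_c x m
            proj = alpha.T @ Kx                  # per-component projections
            scores.append((proj ** 2).sum(0))    # squared projection length
        return np.array(classes)[np.vstack(scores).argmax(0)]

def reduce_prototypes(X, y, per_class=15, rng=None):
    """Stand-in PRS: random per-class subsampling. The paper instead
    uses dedicated reduction schemes to select/refine prototypes."""
    rng = rng if rng is not None else np.random.default_rng(0)
    keep = []
    for c in np.unique(y):
        idx = np.flatnonzero(y == c)
        keep.extend(rng.choice(idx, min(per_class, idx.size), replace=False))
    keep = np.asarray(keep)
    return X[keep], y[keep]

def fuse_majority(predictions):
    """Stand-in CFS: majority vote over the subset classifiers' labels."""
    P = np.vstack(predictions)                   # n_classifiers x m
    return np.array([np.bincount(col).argmax() for col in P.T])

# Toy illustration: two Gaussian classes, data divided into four subsets.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(3, 1, (200, 2))])
y = np.r_[np.zeros(200, int), np.ones(200, int)]

preds = []
for subset in np.array_split(rng.permutation(len(X)), 4):
    Xp, yp = reduce_prototypes(X[subset], y[subset])      # PRS module
    clf = KernelSubspaceClassifier().fit(Xp, yp)          # small kernel matrix
    preds.append(clf.predict(X))                          # per-subset KNS
print("consensus accuracy:", (fuse_majority(preds) == y).mean())
```

Because each subset classifier sees only a reduced prototype set, its kernel matrix stays small, which is where the computational saving comes from; and since the subset classifiers are independent until the fusion step, they can be trained and evaluated in parallel, matching the parallel-processing argument in the abstract.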

Index Terms:
Kernel Principal Component Analysis (kPCA), kernel-based nonlinear subspace (KNS) method, prototype reduction schemes (PRS), classifier fusion strategies (CFS).
Citation:
Sang-Woon Kim, B. John Oommen, "On Using Prototype Reduction Schemes and Classifier Fusion Strategies to Optimize Kernel-Based Nonlinear Subspace Methods," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 3, pp. 455-460, March 2005, doi:10.1109/TPAMI.2005.60