Fast Nearest Neighbor Condensation for Large Data Sets Classification
November 2007 (vol. 19, no. 11)
pp. 1450-1464
This work has two main objectives: to introduce a novel algorithm, the Fast Condensed Nearest Neighbor (FCNN) rule, for computing a training-set-consistent subset for the nearest neighbor decision rule, and to show that condensation algorithms for the nearest neighbor rule can be applied to huge collections of data. The FCNN rule has several interesting properties: it is order independent, its worst-case time complexity is quadratic but often with a small constant prefactor, and it tends to select points very close to the decision boundary. Furthermore, its structure allows the triangle inequality to be exploited effectively to reduce the computational effort. The FCNN rule outperformed even the enhanced variants of existing competence preservation methods introduced here, both in learning speed and in learning scaling behavior, and often in the size of the model, while guaranteeing the same prediction accuracy. Furthermore, it was three orders of magnitude faster than hybrid instance-based learning algorithms on the MNIST and MIT Face databases, and it computed a model whose accuracy is comparable to that of methods incorporating a noise-filtering pass.
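To make the procedure summarized in the abstract concrete, the following Python fragment is a minimal reconstruction of the FCNN iteration: seed the subset with the training point closest to each class centroid, then repeatedly add, for every selected point, the nearest misclassified training point lying in its Voronoi cell, until the subset classifies the whole training set correctly (training-set consistency). All names here (fcnn, X, y) are illustrative, and the sketch deliberately omits the triangle-inequality pruning that the paper exploits to cut down the number of distance computations.

    # Minimal FCNN sketch: X is an (n, d) array of training points,
    # y an (n,) array of class labels. Illustrative reconstruction only;
    # the paper's implementation also prunes distance computations
    # via the triangle inequality, which this sketch omits.
    import numpy as np

    def fcnn(X, y):
        X, y = np.asarray(X, dtype=float), np.asarray(y)
        # Seed: for each class, the training point closest to the class centroid.
        delta = []
        for label in np.unique(y):
            idx = np.flatnonzero(y == label)
            centroid = X[idx].mean(axis=0)
            delta.append(int(idx[np.argmin(np.linalg.norm(X[idx] - centroid, axis=1))]))
        S = []  # indices of the consistent subset under construction
        while delta:
            S.extend(delta)
            Sa = np.asarray(S)
            # Nearest neighbor of every training point within the current subset
            # (builds an (n, |S|, d) temporary; fine for a sketch, not for huge n).
            d = np.linalg.norm(X[:, None, :] - X[Sa][None, :, :], axis=2)
            nn = d.argmin(axis=1)       # position of each point's NN within S
            wrong = y != y[Sa[nn]]      # points the current subset misclassifies
            delta = []
            # For each selected point whose Voronoi cell contains misclassified
            # points, add the misclassified point nearest to it as representative.
            for j in range(len(S)):
                cell = np.flatnonzero((nn == j) & wrong)
                if cell.size:
                    delta.append(int(cell[d[cell, j].argmin()]))
        return Sa

On return, a 1-NN classifier restricted to X[S], y[S] labels every training point the same way the full training set does, which is exactly what makes the selected subset a consistent condensation; the loop terminates because each pass either adds at least one point or finds no misclassifications.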

Index Terms:
Clustering, classification, and association rules, Data mining
Citation:
Fabrizio Angiulli, "Fast Nearest Neighbor Condensation for Large Data Sets Classification," IEEE Transactions on Knowledge and Data Engineering, vol. 19, no. 11, pp. 1450-1464, Nov. 2007, doi:10.1109/TKDE.2007.190645