
Bibliographic References  
Salvador García, Joaquín Derrac, José Ramón Cano, and Francisco Herrera, “Prototype Selection for Nearest Neighbor Classification: Taxonomy and Empirical Study,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, no. 3, pp. 417-435, Mar. 2012, doi: 10.1109/TPAMI.2011.142.
[1] T.M. Cover and P.E. Hart, “Nearest Neighbor Pattern Classification,” IEEE Trans. Information Theory, vol. 13, no. 1, pp. 21-27, Jan. 1967.
[2] I. Kononenko and M. Kukar, Machine Learning and Data Mining: Introduction to Principles and Algorithms. Horwood Publishing Limited, 2007.
[3] A.N. Papadopoulos and Y. Manolopoulos, Nearest Neighbor Search: A Database Perspective. Springer, 2004.
[4] G. Shakhnarovich, T. Darrell, and P. Indyk, Nearest-Neighbor Methods in Learning and Vision: Theory and Practice, G. Shakhnarovich, T. Darrell, and P. Indyk, eds. MIT Press, 2006.
[5] X. Wu and V. Kumar, The Top Ten Algorithms in Data Mining, X. Wu and V. Kumar, eds. Chapman & Hall/CRC Data Mining and Knowledge Discovery, 2009.
[6] D.W. Aha, Lazy Learning, D.W. Aha, ed. Springer, 1997.
[7] E.K. Garcia, S. Feldman, M.R. Gupta, and S. Srivastava, “Completely Lazy Learning,” IEEE Trans. Knowledge and Data Eng., vol. 22, no. 9, pp. 1274-1285, Sept. 2010.
[8] T.M. Mitchell, Machine Learning. McGraw-Hill, 1997.
[9] A.K. Ghosh, “On Optimum Choice of k in Nearest Neighbor Classification,” Computational Statistics & Data Analysis, vol. 50, no. 11, pp. 3113-3123, 2006.
[10] T. Hastie and R. Tibshirani, “Discriminant Adaptive Nearest Neighbor Classification,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 18, no. 6, pp. 607-616, June 1996.
[11] A.K. Ghosh and K. Anil, “On Nearest Neighbor Classification Using Adaptive Choice of k,” J. Computational & Graphical Statistics, vol. 16, no. 2, pp. 482-502, 2007.
[12] C. Domeniconi, J. Peng, and D. Gunopulos, “Locally Adaptive Metric Nearest-Neighbor Classification,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 24, no. 9, pp. 1281-1285, Sept. 2002.
[13] J. Yu, J. Amores, N. Sebe, P. Radeva, and Q. Tian, “Distance Learning for Similarity Estimation,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 30, no. 3, pp. 451-462, Mar. 2008.
[14] A. Argentini and E. Blanzieri, “About Neighborhood Counting Measure Metric and Minimum Risk Metric,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 32, no. 4, pp. 763-765, Apr. 2010.
[15] P. Cunningham, “A Taxonomy of Similarity Mechanisms for Case-Based Reasoning,” IEEE Trans. Knowledge and Data Eng., vol. 21, no. 11, pp. 1532-1543, Nov. 2009.
[16] C.M. Hsu and M.S. Chen, “On the Design and Applicability of Distance Functions in High-Dimensional Data Space,” IEEE Trans. Knowledge and Data Eng., vol. 21, no. 4, pp. 523-536, Apr. 2009.
[17] B.K. Kim and S.B. Park, “A Fast k Nearest Neighbor Finding Algorithm Based on the Ordered Partition,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 8, no. 6, pp. 761-766, Nov. 1986.
[18] S.A. Nene and S.K. Nayar, “A Simple Algorithm for Nearest Neighbor Search in High Dimensions,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 19, no. 9, pp. 989-1003, Sept. 1997.
[19] H. Samet, “K-Nearest Neighbor Finding Using MaxNearestDist,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 30, no. 2, pp. 243-252, Feb. 2008.
[20] P. Grother, G.T. Candela, and J.L. Blue, “Fast Implementations of Nearest-Neighbor Classifiers,” Pattern Recognition, vol. 30, no. 3, pp. 459-465, 1997.
[21] S. Arya, D.M. Mount, N.S. Netanyahu, R. Silverman, and A.Y. Wu, “An Optimal Algorithm for Approximate Nearest Neighbor Searching in Fixed Dimensions,” J. ACM, vol. 45, no. 6, pp. 891-923, 1998.
[22] A. Andoni and P. Indyk, “Near-Optimal Hashing Algorithms for Approximate Nearest Neighbor in High Dimensions,” Comm. ACM, vol. 51, no. 1, pp. 117-122, 2008.
[23] D.R. Wilson and T.R. Martinez, “Reduction Techniques for Instance-Based Learning Algorithms,” Machine Learning, vol. 38, no. 3, pp. 257-286, 2000.
[24] N. Jankowski and M. Grochowski, “Comparison of Instances Selection Algorithms I. Algorithms Survey,” Proc. Int'l Conf. Artificial Intelligence and Soft Computing, pp. 598-603, 2004.
[25] E. Pekalska, R.P.W. Duin, and P. Paclík, “Prototype Selection for Dissimilarity-Based Classifiers,” Pattern Recognition, vol. 39, no. 2, pp. 189-208, 2006.
[26] W. Lam, C.K. Keung, and D. Liu, “Discovering Useful Concept Prototypes for Classification Based on Filtering and Abstraction,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 24, no. 8, pp. 1075-1090, Aug. 2002.
[27] M. Lozano, J.M. Sotoca, J.S. Sánchez, F. Pla, E. Pekalska, and R.P.W. Duin, “Experimental Study on Prototype Optimisation Algorithms for Prototype-Based Classification in Vector Spaces,” Pattern Recognition, vol. 39, no. 10, pp. 1827-1838, 2006.
[28] F. Angiulli, “Fast Nearest Neighbor Condensation for Large Data Sets Classification,” IEEE Trans. Knowledge and Data Eng., vol. 19, no. 11, pp. 1450-1464, Nov. 2007.
[29] J.C. Bezdek and L.I. Kuncheva, “Nearest Prototype Classifier Designs: An Experimental Study,” Int'l J. Intelligent Systems, vol. 16, pp. 1445-1473, 2001.
[30] S.W. Kim and B.J. Oommen, “A Brief Taxonomy and Ranking of Creative Prototype Reduction Schemes,” Pattern Analysis and Applications, vol. 6, pp. 232-244, 2003.
[31] J.R. Cano, F. Herrera, and M. Lozano, “Stratification for Scaling up Evolutionary Prototype Selection,” Pattern Recognition Letters, vol. 26, no. 7, pp. 953-963, 2005.
[32] C.L. Chang, “Finding Prototypes for Nearest Neighbor Classifiers,” IEEE Trans. Computers, vol. 23, no. 11, pp. 1179-1184, Nov. 1974.
[33] T. Kohonen, “The Self-Organizing Map,” Proc. IEEE, vol. 78, no. 9, pp. 1464-1480, Sept. 1990.
[34] A. Cervantes, I.M. Galván, and P. Isasi, “AMPSO: A New Particle Swarm Method for Nearest Neighborhood Classification,” IEEE Trans. Systems, Man, and Cybernetics: Part B, Cybernetics, vol. 39, no. 5, pp. 1082-1091, Oct. 2009.
[35] I. Triguero, S. García, and F. Herrera, “IPADE: Iterative Prototype Adjustment for Nearest Neighbor Classification,” IEEE Trans. Neural Networks, vol. 21, no. 12, pp. 1984-1990, Dec. 2010.
[36] I. Triguero, S. García, and F. Herrera, “Differential Evolution for Optimizing the Positioning of Prototypes in Nearest Neighbor Classification,” Pattern Recognition, vol. 44, pp. 901-916, 2011.
[37] P. Domingos, “Unifying Instance-Based and Rule-Based Induction,” Machine Learning, vol. 24, no. 2, pp. 141-168, 1996.
[38] O. Luaces and A. Bahamonde, “Inflating Examples to Obtain Rules,” Int'l J. Intelligent Systems, vol. 18, pp. 1113-1143, 2003.
[39] S. García, J. Derrac, J. Luengo, C.J. Carmona, and F. Herrera, “Evolutionary Selection of Hyperrectangles in Nested Generalized Exemplar Learning,” Applied Soft Computing, vol. 11, pp. 3032-3045, 2011.
[40] D.B. Skalak, “Prototype and Feature Selection by Sampling and Random Mutation Hill Climbing Algorithms,” Proc. 11th Int'l Conf. Machine Learning, pp. 293-301, 1994.
[41] J. Derrac, S. García, and F. Herrera, “IFS-CoCo: Instance and Feature Selection Based on Cooperative Coevolution with Nearest Neighbor Rule,” Pattern Recognition, vol. 43, no. 6, pp. 2082-2105, 2010.
[42] D. Wettschereck, D.W. Aha, and T. Mohri, “A Review and Empirical Evaluation of Feature Weighting Methods for a Class of Lazy Learning Algorithms,” Artificial Intelligence Rev., vol. 11, nos. 1-5, pp. 273-314, 1997.
[43] R. Paredes and E. Vidal, “Learning Weighted Metrics to Minimize Nearest-Neighbor Classification Error,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 28, no. 7, pp. 1100-1110, July 2006.
[44] F. Fernández and P. Isasi, “Local Feature Weighting in Nearest Prototype Classification,” IEEE Trans. Neural Networks, vol. 19, no. 1, pp. 40-53, Jan. 2008.
[45] E. Alpaydin, “Voting over Multiple Condensed Nearest Neighbors,” Artificial Intelligence Rev., vol. 11, nos. 1-5, pp. 115-132, 1997.
[46] N. García-Pedrajas, “Constructing Ensembles of Classifiers by Means of Weighted Instance Selection,” IEEE Trans. Neural Networks, vol. 20, no. 2, pp. 258-277, Feb. 2009.
[47] D.R. Wilson and T.R. Martinez, “Improved Heterogeneous Distance Functions,” J. Artificial Intelligence Research, vol. 6, pp. 1-34, 1997.
[48] R. Paredes and E. Vidal, “Learning Prototypes and Distances: A Prototype Reduction Technique Based on Nearest Neighbor Error Minimization,” Pattern Recognition, vol. 39, no. 2, pp. 180-188, 2006.
[49] C. Gagné and M. Parizeau, “Coevolution of Nearest Neighbor Classifiers,” Int'l J. Pattern Recognition and Artificial Intelligence, vol. 21, no. 5, pp. 921-946, 2007.
[50] S.W. Kim and B.J. Oommen, “On Using Prototype Reduction Schemes to Optimize Dissimilarity-Based Classification,” Pattern Recognition, vol. 40, no. 11, pp. 2946-2957, 2007.
[51] A. Haro-García and N. García-Pedrajas, “A Divide-and-Conquer Recursive Approach for Scaling Up Instance Selection Algorithms,” Data Mining and Knowledge Discovery, vol. 18, no. 3, pp. 392-418, 2009.
[52] C. García-Osorio, A. de Haro-García, and N. García-Pedrajas, “Democratic Instance Selection: A Linear Complexity Instance Selection Algorithm Based on Classifier Ensemble Concepts,” Artificial Intelligence, vol. 174, nos. 5/6, pp. 410-441, 2010.
[53] F. Angiulli and G. Folino, “Distributed Nearest Neighbor-Based Condensation of Very Large Data Sets,” IEEE Trans. Knowledge and Data Eng., vol. 19, no. 12, pp. 1593-1606, Dec. 2007.
[54] J.R. Cano, F. Herrera, and M. Lozano, “Evolutionary Stratified Training Set Selection for Extracting Classification Rules with Trade-Off Precision-Interpretability,” Data and Knowledge Eng., vol. 60, no. 1, pp. 90-108, 2007.
[55] K.J. Kim, “Artificial Neural Networks with Evolutionary Instance Selection for Financial Forecasting,” Expert Systems with Applications, vol. 30, no. 3, pp. 519-526, 2006.
[56] J.R. Cano, F. Herrera, M. Lozano, and S. García, “Making CN2-SD Subgroup Discovery Algorithm Scalable to Large Size Data Sets Using Instance Selection,” Expert Systems with Applications, vol. 35, no. 4, pp. 1949-1965, 2008.
[57] J.R. Cano, S. García, and F. Herrera, “Subgroup Discovery in Large Size Data Sets Preprocessed Using Stratified Instance Selection for Increasing the Presence of Minority Classes,” Pattern Recognition Letters, vol. 29, no. 16, pp. 2156-2164, 2008.
[58] Y. Chen, J. Bi, and J.Z. Wang, “MILES: Multiple-Instance Learning via Embedded Instance Selection,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 28, no. 12, pp. 1931-1947, Dec. 2006.
[59] Z. Fu, A. Robles-Kelly, and J. Zhou, “MILIS: Multiple Instance Learning with Instance Selection,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 33, no. 5, pp. 958-977, May 2011.
[60] N.V. Chawla, D.A. Cieslak, L.O. Hall, and A. Joshi, “Automatically Countering Imbalance and Its Empirical Relationship to Cost,” Data Mining and Knowledge Discovery, vol. 17, no. 2, pp. 225-252, 2008.
[61] G.E.A.P.A. Batista, R.C. Prati, and M.C. Monard, “A Study of the Behavior of Several Methods for Balancing Machine Learning Training Data,” ACM SIGKDD Explorations Newsletter, vol. 6, no. 1, pp. 20-29, 2004.
[62] S. García and F. Herrera, “Evolutionary Under-Sampling for Classification with Imbalanced Data Sets: Proposals and Taxonomy,” Evolutionary Computation, vol. 17, no. 3, pp. 275-306, 2009.
[63] S.W. Kim and B.J. Oommen, “On Using Prototype Reduction Schemes to Enhance the Computation of Volume-Based Inter-Class Overlap Measures,” Pattern Recognition, vol. 42, no. 11, pp. 2695-2704, 2009.
[64] R.A. Mollineda, J.S. Sánchez, and J.M. Sotoca, “Data Characterization for Effective Prototype Selection,” Proc. Second Iberian Conf. Pattern Recognition and Image Analysis, pp. 27-34, 2005.
[65] S. García, J.R. Cano, E. Bernadó-Mansilla, and F. Herrera, “Diagnose Effective Evolutionary Prototype Selection Using an Overlapping Measure,” Int'l J. Pattern Recognition and Artificial Intelligence, vol. 23, no. 8, pp. 1527-1548, 2009.
[66] P.E. Hart, “The Condensed Nearest Neighbor Rule,” IEEE Trans. Information Theory, vol. 14, no. 3, pp. 515-516, May 1968.
[67] G.W. Gates, “The Reduced Nearest Neighbor Rule,” IEEE Trans. Information Theory, vol. 18, no. 3, pp. 431-433, May 1972.
[68] D.L. Wilson, “Asymptotic Properties of Nearest Neighbor Rules Using Edited Data,” IEEE Trans. Systems, Man, and Cybernetics, vol. 2, no. 3, pp. 408-421, July 1972.
[69] J.R. Ullmann, “Automatic Selection of Reference Data for Use in a Nearest-Neighbor Method of Pattern Classification,” IEEE Trans. Information Theory, vol. 20, no. 4, pp. 541-543, July 1974.
[70] G.L. Ritter, H.B. Woodruff, S.R. Lowry, and T.L. Isenhour, “An Algorithm for a Selective Nearest Neighbor Decision Rule,” IEEE Trans. Information Theory, vol. 21, no. 6, pp. 665-669, Nov. 1975.
[71] I. Tomek, “An Experiment with the Edited Nearest-Neighbor Rule,” IEEE Trans. Systems, Man, and Cybernetics, vol. 6, no. 6, pp. 448-452, June 1976.
[72] I. Tomek, “Two Modifications of CNN,” IEEE Trans. Systems, Man, and Cybernetics, vol. 6, no. 6, pp. 769-772, Nov. 1976.
[73] K.C. Gowda and G. Krishna, “The Condensed Nearest Neighbor Rule Using the Concept of Mutual Nearest Neighborhood,” IEEE Trans. Information Theory, vol. 25, no. 4, pp. 488-490, July 1979.
[74] P.A. Devijver and J. Kittler, Pattern Recognition: A Statistical Approach. Prentice Hall, 1982.
[75] P.A. Devijver, “On the Editing Rate of The Multiedit Algorithm,” Pattern Recognition Letters, vol. 4, pp. 9-12, 1986.
[76] D. Kibler and D.W. Aha, “Learning Representative Exemplars of Concepts: An Initial Case Study,” Proc. Fourth Int'l Workshop Machine Learning, pp. 24-30, 1987.
[77] D.W. Aha, D. Kibler, and M.K. Albert, “Instance-Based Learning Algorithms,” Machine Learning, vol. 6, no. 1, pp. 37-66, 1991.
[78] B.V. Dasarathy, “Minimal Consistent Set (MCS) Identification for Optimal Nearest Neighbor Decision System Design,” IEEE Trans. Systems, Man, and Cybernetics, vol. 24, no. 3, pp. 511-517, Mar. 1994.
[79] R.M. Cameron-Jones, “Instance Selection by Encoding Length Heuristic with Random Mutation Hill Climbing,” Proc. Eighth Australian Joint Conf. Artificial Intelligence, pp. 99-106, 1995.
[80] C.E. Brodley, “Recursive Automatic Bias Selection for Classifier Construction,” Machine Learning, vol. 20, nos. 1/2, pp. 63-94, 1995.
[81] D.G. Lowe, “Similarity Metric Learning for a Variable-Kernel Classifier,” Neural Computation, vol. 7, no. 1, pp. 72-85, 1995.
[82] J.S. Sánchez, F. Pla, and F.J. Ferri, “Prototype Selection for the Nearest Neighbor Rule through Proximity Graphs,” Pattern Recognition Letters, vol. 18, pp. 507-513, 1997.
[83] U. Lipowezky, “Selection of the Optimal Prototype Subset for 1-NN Classification,” Pattern Recognition Letters, vol. 19, no. 10, pp. 907-918, 1998.
[84] L.I. Kuncheva, “Editing for the k-Nearest Neighbors Rule by a Genetic Algorithm,” Pattern Recognition Letters, vol. 16, no. 8, pp. 809-814, 1995.
[85] L.I. Kuncheva and L.C. Jain, “Nearest Neighbor Classifier: Simultaneous Editing and Feature Selection,” Pattern Recognition Letters, vol. 20, nos. 11-13, pp. 1149-1156, 1999.
[86] K. Hattori and M. Takahashi, “A New Edited K-Nearest Neighbor Rule in the Pattern Classification Problem,” Pattern Recognition, vol. 33, no. 3, pp. 521-528, 2000.
[87] B. Sierra, E. Lazkano, I. Inza, M. Merino, P. Larrañaga, and J. Quiroga, “Prototype Selection and Feature Subset Selection by Estimation of Distribution Algorithms. A Case Study in the Survival of Cirrhotic Patients Treated with TIPS,” Proc. Eighth Conf. AI in Medicine in Europe, pp. 20-29, 2001.
[88] V. Cerverón and F.J. Ferri, “Another Move toward the Minimum Consistent Subset: A Tabu Search Approach to the Condensed Nearest Neighbor Rule,” IEEE Trans. Systems, Man, and Cybernetics, Part B, vol. 31, no. 3, pp. 408-413, June 2001.
[89] H. Brighton and C. Mellish, “Advances in Instance Selection for Instance-Based Learning Algorithms,” Data Mining and Knowledge Discovery, vol. 6, no. 2, pp. 153-172, 2002.
[90] V.S. Devi and M.N. Murty, “An Incremental Prototype Set Building Technique,” Pattern Recognition, vol. 35, no. 2, pp. 505-513, 2002.
[91] S.Y. Ho, C.C. Liu, and S. Liu, “Design of an Optimal Nearest Neighbor Classifier Using an Intelligent Genetic Algorithm,” Pattern Recognition Letters, vol. 23, no. 13, pp. 1495-1503, 2002.
[92] M. Sebban and R. Nock, “Instance Pruning as an Information Preserving Problem,” Proc. 17th Int'l Conf. Machine Learning, pp. 855-862, 2000.
[93] M. Sebban, R. Nock, E. Brodley, and A. Danyluk, “Stopping Criterion for Boosting-Based Data Reduction Techniques: From Binary to Multiclass Problems,” J. Machine Learning Research, vol. 3, pp. 863-885, 2002.
[94] Y. Wu, K.G. Ianakiev, and V. Govindaraju, “Improved K-Nearest Neighbor Classification,” Pattern Recognition, vol. 35, no. 10, pp. 2311-2318, 2002.
[95] H. Zhang and G. Sun, “Optimal Reference Subset Selection for Nearest Neighbor Classification by Tabu Search,” Pattern Recognition, vol. 35, no. 7, pp. 1481-1490, 2002.
[96] M.T. Lozano, J.S. Sánchez, and F. Pla, “Using the Geometrical Distribution of Prototypes for Training Set Condensing,” Proc. Conf. Spanish Assoc. for Artificial Intelligence, pp. 618-627, 2003.
[97] K.P. Zhao, S.G. Zhou, J.H. Guan, and A.Y. Zhou, “C-Pruner: An Improved Instance Pruning Algorithm,” Proc. Second Int'l Conf. Machine Learning and Cybernetics, pp. 94-99, 2003.
[98] J.R. Cano, F. Herrera, and M. Lozano, “Using Evolutionary Algorithms as Instance Selection for Data Reduction in KDD: An Experimental Study,” IEEE Trans. Evolutionary Computation, vol. 7, no. 6, pp. 561-575, Dec. 2003.
[99] J.C. Riquelme, J.S. Aguilar-Ruiz, and M. Toro, “Finding Representative Patterns with Ordered Projections,” Pattern Recognition, vol. 36, no. 4, pp. 1009-1018, 2003.
[100] J.S. Sánchez, R. Barandela, A.I. Marqués, R. Alejo, and J. Badenas, “Analysis of New Techniques to Obtain Quality Training Sets,” Pattern Recognition Letters, vol. 24, no. 7, pp. 1015-1022, 2003.
[101] F. Vázquez, J.S. Sánchez, and F. Pla, “A Stochastic Approach to Wilson's Editing Algorithm,” Proc. Second Iberian Conf. Pattern Recognition and Image Analysis, pp. 35-42, 2005.
[102] Y. Li, Z. Hu, Y. Cai, and W. Zhang, “Support Vector Based Prototype Selection Method for Nearest Neighbor Rules,” Proc. First Int'l Conf. Advances in Natural Computation, pp. 528-535, 2005.
[103] J.A. Olvera-López, J.F. Martínez-Trinidad, and J.A. Carrasco-Ochoa, “Edition Schemes Based on BSE,” Proc. 10th Iberoam. Congress on Pattern Recognition, pp. 360-367, 2005.
[104] R. Barandela, F.J. Ferri, and J.S. Sánchez, “Decision Boundary Preserving Prototype Selection for Nearest Neighbor Classification,” Int'l J. Pattern Recognition and Artificial Intelligence, vol. 19, no. 6, pp. 787-806, 2005.
[105] F. Chang, C.C. Lin, and C.J. Lu, “Adaptive Prototype Learning Algorithms: Theoretical and Experimental Studies,” J. Machine Learning Research, vol. 7, pp. 2125-2148, 2006.
[106] X.Z. Wang, B. Wu, Y.L. He, and X.H. Pei, “NRMCS: Noise Removing Based on the MCS,” Proc. Seventh Int'l Conf. Machine Learning and Cybernetics, pp. 89-93, 2008.
[107] R. Gil-Pita and X. Yao, “Evolving Edited K-Nearest Neighbor Classifiers,” Int'l J. Neural Systems, vol. 18, no. 6, pp. 459-467, 2008.
[108] S. García, J.R. Cano, and F. Herrera, “A Memetic Algorithm for Evolutionary Prototype Selection: A Scaling Up Approach,” Pattern Recognition, vol. 41, no. 8, pp. 2693-2709, 2008.
[109] E. Marchiori, “Hit Miss Networks with Applications to Instance Selection,” J. Machine Learning Research, vol. 9, pp. 997-1017, 2008.
[110] H.A. Fayed and A.F. Atiya, “A Novel Template Reduction Approach for the K-Nearest Neighbor Method,” IEEE Trans. Neural Networks, vol. 20, no. 5, pp. 890-896, May 2009.
[111] J.A. Olvera-López, J.A. Carrasco-Ochoa, and J.F. Martínez-Trinidad, “A New Fast Prototype Selection Method Based on Clustering,” Pattern Analysis and Applications, vol. 13, no. 2, pp. 131-141, 2010.
[112] E. Marchiori, “Class Conditional Nearest Neighbor for Large Margin Instance Selection,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 32, no. 2, pp. 364-370, Feb. 2010.
[113] N. García-Pedrajas, J.A. Romero del Castillo, and D. Ortiz-Boyer, “A Cooperative Coevolutionary Algorithm for Instance Selection for Instance-Based Learning,” Machine Learning, vol. 78, no. 3, pp. 381-420, 2010.
[114] J. Alcalá-Fdez, L. Sánchez, S. García, M.J. del Jesus, S. Ventura, J.M. Garrell, J. Otero, C. Romero, J. Bacardit, V.M. Rivas, J.C. Fernández, and F. Herrera, “KEEL: A Software Tool to Assess Evolutionary Algorithms for Data Mining Problems,” Soft Computing, vol. 13, no. 3, pp. 307-318, 2009.
[115] A. Asuncion and D. Newman, “UCI Machine Learning Repository,” http://www.ics.uci.edu/~mlearn/MLRepository.html, 2007.
[116] J.A. Cohen, “Coefficient of Agreement for Nominal Scales,” Educational and Psychological Measurement, vol. 20, pp. 37-46, 1960.
[117] I.H. Witten and E. Frank, Data Mining: Practical Machine Learning Tools and Techniques. Morgan Kaufmann, 2005.
[118] A. Ben-David, “A Lot of Randomness Is Hiding in Accuracy,” Eng. Applications of Artificial Intelligence, vol. 20, pp. 875-885, 2007.
[119] J. Demšar, “Statistical Comparisons of Classifiers over Multiple Data Sets,” J. Machine Learning Research, vol. 7, pp. 1-30, 2006.
[120] S. García and F. Herrera, “An Extension on 'Statistical Comparisons of Classifiers over Multiple Data Sets' for All Pairwise Comparisons,” J. Machine Learning Research, vol. 9, pp. 2677-2694, 2008.
[121] S. García, A. Fernández, J. Luengo, and F. Herrera, “Advanced Nonparametric Tests for Multiple Comparisons in the Design of Experiments in Computational Intelligence and Data Mining: Experimental Analysis of Power,” Information Sciences, vol. 180, no. 10, pp. 2044-2064, 2010.
[122] F. Wilcoxon, “Individual Comparisons by Ranking Methods,” Biometrics Bull., vol. 1, pp. 80-83, 1945.