Learning Vector Quantization with Training Data Selection
January 2006 (vol. 28 no. 1)
pp. 157-162
In this paper, we propose a method that selects a subset of the training data points to update LVQ prototypes. The main goal is to drive the prototypes toward more convenient locations, thereby reducing misclassification errors. The method selects an update set composed of points considered to be at risk of being captured by a prototype of another class. We combine the proposed methodology with a weighted norm, instead of the Euclidean norm, in order to establish different levels of relevance for the input attributes. The technique was evaluated in a controlled experiment and on publicly available Web data sets.
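The selection idea in the abstract can be sketched in code. The snippet below is an illustrative reading, not the paper's exact criterion: it uses an LVQ2.1-style window ratio as a stand-in for the "at-risk" test, and the function names, the `window` threshold, and the update rule are assumptions made for the sketch. The weighted norm from the abstract appears as a per-attribute `relevance` vector.

```python
import numpy as np

def weighted_dist(x, w, relevance):
    # Weighted squared norm: attributes with larger relevance count more.
    return np.sum(relevance * (x - w) ** 2)

def lvq_selective(X, y, prototypes, proto_labels, relevance,
                  lr=0.05, window=0.8, epochs=30, seed=0):
    """LVQ training that updates prototypes only with 'at-risk' points:
    samples whose nearest same-class and nearest other-class prototypes
    lie at comparable distances (a window rule in the style of LVQ2.1)."""
    rng = np.random.default_rng(seed)
    W = prototypes.copy()
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            x, label = X[i], y[i]
            d = np.array([weighted_dist(x, w, relevance) for w in W])
            same = np.where(proto_labels == label)[0]
            other = np.where(proto_labels != label)[0]
            j = same[np.argmin(d[same])]    # nearest same-class prototype
            k = other[np.argmin(d[other])]  # nearest other-class prototype
            # Keep only points at risk of capture by another class:
            # the two distances must be comparable (near the border).
            ratio = min(d[j], d[k]) / (max(d[j], d[k]) + 1e-12)
            if ratio < window:
                continue  # safely classified point: skip the update
            W[j] += lr * (x - W[j])   # attract the same-class prototype
            W[k] -= lr * (x - W[k])   # repel the competing prototype
    return W
```

Skipping safely classified points concentrates the updates on the decision border, which is the mechanism the abstract credits for reducing misclassification; setting `relevance` to the all-ones vector recovers the ordinary Euclidean case.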

[1] A. Gersho, “Asymptotically Optimal Block Quantization,” IEEE Trans. Information Theory, vol. 25, pp. 373-380, 1979.
[2] P. Zador, “Asymptotic Quantization Error of Continuous Signals and the Quantization Dimension,” IEEE Trans. Information Theory, vol. 28, pp. 139-149, 1982.
[3] R.A. Johnson and D.W. Wichern, Applied Multivariate Statistical Analysis. Prentice Hall, 1998.
[4] T. Kohonen, “Self-Organized Formation of Topologically Correct Feature Maps,” Biological Cybernetics, vol. 43, p. 59, 1982.
[5] T. Kohonen, Self-Organizing Maps, third ed. Springer, 2001.
[6] T. Kohonen, “An Introduction to Neural Computing,” Neural Networks, vol. 1, pp. 3-16, 1988.
[7] R.O. Duda, P.E. Hart, and D.G. Stork, Pattern Classification, second ed. John Wiley, 2001.
[8] L. Bottou, “Stochastic Learning,” Lecture Notes in Artificial Intelligence, vol. 3176, pp. 146-168, 2004.
[9] T. Kohonen, G. Barna, and R. Chrisley, “Statistical Pattern Recognition with Neural Networks: Benchmarking Studies,” Proc. IEEE Int'l Conf. Neural Networks, 1988.
[10] A. Sato and K. Yamada, “Generalized Learning Vector Quantization,” Advances in Neural Information Processing Systems, G. Tesauro, D. Touretzky, and T. Leen, eds., vol. 7, pp. 423-429, 1995.
[11] B. Hammer and T. Villmann, “Generalized Relevance Learning Vector Quantization,” Neural Networks, vol. 15, pp. 1059-1068, 2002.
[12] A.K. Qin and P.N. Suganthan, “Initialization Insensitive LVQ Algorithm Based on Cost-Function Adaptation,” Pattern Recognition, 2005.
[13] M.T. Vakil-Baghmisheh and N. Pavešić, “Premature Clustering Phenomenon and New Training Algorithms for LVQ,” Pattern Recognition, vol. 36, no. 8, pp. 1901-1912, 2003.
[14] S. Seo and K. Obermayer, “Soft Learning Vector Quantization,” Neural Computation, vol. 15, no. 7, pp. 1589-1604, 2003.
[15] H.H. Harman, Modern Factor Analysis. Univ. of Chicago Press, 1967.
[16] E. Oja, “Principal Component Analysis,” The Handbook of Brain Theory and Neural Networks, M. Arbib, ed., pp. 753-756, MIT Press, 1995.
[17] M. Girolami, Self-Organising Neural Networks: Independent Component Analysis and Blind Source Separation. Springer, 1999.
[18] L. Breiman, J.H. Friedman, R.A. Olshen, and C. Stone, Classification and Regression Trees, Belmont, Calif.: Wadsworth, 1984.
[19] R. Setiono and H. Liu, “Neural Network Feature Selector,” IEEE Trans. Neural Networks, vol. 8, pp. 654-661, 1997.
[20] R. Battiti, “Using Mutual Information for Selecting Features in Supervised Neural Net Learning,” IEEE Trans. Neural Networks, vol. 5, pp. 537-550, 1994.
[21] N. Kwak and C. Choi, “Input Feature Selection for Classification Problems,” IEEE Trans. Neural Networks, vol. 13, no. 1, pp. 143-159, 2002.
[22] I. Gath and A.B. Geva, “Unsupervised Optimal Fuzzy Clustering,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 11, no. 7, pp. 773-781, July 1989.
[23] V. Vapnik, The Nature of Statistical Learning Theory. Springer-Verlag, 1995.
[24] K. Crammer, R. Gilad-Bachrach, A. Navot, and N. Tishby, “Margin Analysis of the LVQ Algorithm,” Proc. 15th Ann. Conf. Neural Information Processing Systems, 2002.
[25] S. Haykin, Neural Networks: A Comprehensive Foundation. Prentice-Hall, 1998.
[26] R.A. Fisher, “The Use of Multiple Measurements in Taxonomic Problems,” Ann. Eugenics, vol. 7, part II, pp. 179-188, 1936.
[27] R. Detrano, A. Janosi, W. Steinbrunn, M. Pfisterer, J. Schmid, S. Sandhu, K. Guppy, S. Lee, and V. Froelicher, “International Application of a New Probability Algorithm for the Diagnosis of Coronary Artery Disease,” Am. J. Cardiology, pp. 304-310, 1989.
[28] W.H. Wolberg and O.L. Mangasarian, “Multisurface Method of Pattern Separation for Medical Diagnosis Applied to Breast Cytology,” Proc. Nat'l Academy of Sciences, USA, vol. 87, pp. 9193-9196, 1990.
[29] O.L. Mangasarian, “Multisurface Method of Pattern Separation,” IEEE Trans. Information Theory, vol. 14, no. 6, pp. 801-807, Nov. 1968.
[30] F. Berzal, J.C. Cubero, F. Cuenca, and M. Martín-Bautista, “On the Quest for Easy-to-Understand Splitting Rules,” Data and Knowledge Eng., vol. 44, no. 1, pp. 31-48, 2003.

Index Terms:
Learning vector quantization (LVQ), pattern classification, clustering, data selection, neural networks.
Citation:
Carlos E. Pedreira, "Learning Vector Quantization with Training Data Selection," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 28, no. 1, pp. 157-162, Jan. 2006, doi:10.1109/TPAMI.2006.14