
Thomas Martini Jørgensen, Christian Linneberg, "Theoretical Analysis and Improved Decision Criteria for the nTuple Classifier," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 21, no. 4, pp. 336347, April, 1999.  
Keywords: n-tuple classifier, maximum likelihood, Bayes, cross-validation, RAM net.
DOI: 10.1109/34.761264
Abstract—The anticipated behavior of the n-tuple classification system is that it assigns the highest output score to the class to which the input example actually belongs. By performing a theoretical analysis of how the output scores relate to the underlying probability distributions of the data, this paper shows that this behavior cannot, in general, be expected. The theoretical results explain the behavior observed in experimental studies. The analysis also gives valuable insight into how the n-tuple classifier can be improved to deal with skewed training priors, which until now have been a hard problem for the architecture to tackle. It is shown that relating an output score to the probability that a given class generated the data makes it possible to design the n-tuple net to operate as a close approximation to the Bayes estimator. It is specifically illustrated that this approximation can be obtained by modifying the decision criteria. In real cases, the underlying example distributions are unknown, so the optimum way to treat the output scores cannot be calculated theoretically. However, the feasibility of performing leave-one-out cross-validation tests in n-tuple networks makes it possible to obtain proper processing of the scores in such cases.
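To make the scoring mechanism discussed in the abstract concrete, the following is a minimal, hypothetical sketch of an n-tuple (RAM-net) classifier, not the paper's own code: each RAM node samples a fixed random n-tuple of input bit positions, training records hits in the addressed cell for the true class, and the classical decision criterion scores a class by the number of nodes whose addressed cell was visited during training. The class, parameter names, and defaults are illustrative assumptions.

```python
import random

class NTupleClassifier:
    """Minimal n-tuple (RAM-net) classifier sketch with the classical criterion."""

    def __init__(self, input_bits, n=4, num_tuples=16, num_classes=2, seed=0):
        rng = random.Random(seed)
        # Each node reads a fixed random n-tuple of input bit positions.
        self.maps = [rng.sample(range(input_bits), n) for _ in range(num_tuples)]
        # counts[node][cls][address] -> number of training hits in that cell.
        self.counts = [[[0] * (2 ** n) for _ in range(num_classes)]
                       for _ in range(num_tuples)]

    def _address(self, x, mapping):
        # Pack the n sampled bits into a RAM address.
        addr = 0
        for i, pos in enumerate(mapping):
            addr |= (x[pos] & 1) << i
        return addr

    def train(self, x, label):
        # Record a hit in the addressed cell of every node, for the true class.
        for node, mapping in enumerate(self.maps):
            self.counts[node][label][self._address(x, mapping)] += 1

    def scores(self, x):
        # Classical criterion: per class, count nodes whose cell is non-empty.
        num_classes = len(self.counts[0])
        s = [0] * num_classes
        for node, mapping in enumerate(self.maps):
            addr = self._address(x, mapping)
            for cls in range(num_classes):
                if self.counts[node][cls][addr] > 0:
                    s[cls] += 1
        return s
```

The paper's point is that taking the argmax of these raw scores need not select the class that actually generated the input, especially under skewed training priors, which is what motivates replacing this decision criterion with one tied to the class-conditional probabilities.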