Issue No. 09 - September 1994 (vol. 16)
pp. 915-919
ABSTRACT
Shows that systems built on a simple statistical technique and a large training database can be automatically optimized to produce classification accuracies of 99% in the domain of handwritten digits. It is also shown that the performance of these systems scales consistently with the size of the training database: the error rate is cut by more than half for every tenfold increase in the size of the training set, from 10 to 100,000 examples. Three distance metrics for the standard nearest neighbor classification system are investigated: a simple Hamming distance metric, a pixel distance metric, and a metric based on the extraction of penstroke features. Systems employing these metrics were trained and tested on a standard, publicly available database of nearly 225,000 digits provided by the National Institute of Standards and Technology. Additionally, a confidence metric is both introduced by the authors and discovered and optimized by the system. The new confidence measure proves superior to the commonly used nearest neighbor distance.
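The following is a minimal sketch of the nearest neighbor approach described in the abstract, assuming binarized digit images stored as NumPy arrays and using the Hamming distance metric (the count of disagreeing pixels). The image size, variable names, and the use of the runner-up distance as a confidence signal are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def hamming_distance(a, b):
    # Number of pixels at which two binarized images disagree.
    return np.count_nonzero(a != b)

def classify_nearest_neighbor(query, train_images, train_labels):
    # Label of the training example closest to `query` under the Hamming
    # metric, plus the two smallest distances (a simple basis for a
    # confidence measure; the paper's actual confidence metric may differ).
    distances = np.array([hamming_distance(query, img) for img in train_images])
    order = np.argsort(distances)
    best, runner_up = order[0], order[1]
    return train_labels[best], distances[best], distances[runner_up]

# Illustrative usage with random binary "images" (28x28 assumed here;
# the resolution of the NIST digits is not stated in the abstract).
rng = np.random.default_rng(0)
train_images = rng.integers(0, 2, size=(1000, 28, 28), dtype=np.uint8)
train_labels = rng.integers(0, 10, size=1000)
query = rng.integers(0, 2, size=(28, 28), dtype=np.uint8)

label, d_best, d_second = classify_nearest_neighbor(query, train_images, train_labels)
print(label, d_best, d_second)
```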
INDEX TERMS
optical character recognition; learning (artificial intelligence); computer vision; handwritten character classification; nearest neighbor distance; simple statistical technique; large training database; classification accuracies; handwritten digits; error rate; distance metrics; standard nearest neighbor classification system; Hamming distance metric; pixel distance metric; penstroke feature extraction; National Institute of Standards and Technology; confidence metric
CITATION
S.J. Smith, M.O. Bourgoin, K. Sims, H.L. Voorhees, "Handwritten Character Classification Using Nearest Neighbor in Large Databases", IEEE Transactions on Pattern Analysis & Machine Intelligence, vol. 16, no. 9, pp. 915-919, September 1994, doi:10.1109/34.310689