Issue No. 09 - September (1994 vol. 16)
DOI Bookmark: http://doi.ieeecomputersociety.org/10.1109/34.310689
<p>Shows that systems built on a simple statistical technique and a large training database can be automatically optimized to produce classification accuracies of 99% in the domain of handwritten digits. It is also shown that the performance of these systems scales consistently with the size of the training database: the error rate is cut by more than half for every tenfold increase in the size of the training set, from 10 to 100,000 examples. Three distance metrics for the standard nearest neighbor classification system are investigated: a simple Hamming distance metric, a pixel distance metric, and a metric based on the extraction of penstroke features. Systems employing these metrics were trained and tested on a standard, publicly available database of nearly 225,000 digits provided by the National Institute of Standards and Technology. Additionally, a confidence metric is both introduced by the authors and discovered and optimized by the system. The new confidence measure proves superior to the commonly used nearest neighbor distance.</p>
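<p>The core technique the abstract describes, nearest neighbor classification under a Hamming distance metric over binarized digit images, can be sketched as follows. This is a minimal illustrative implementation, not the authors' code; the function names and the brute-force linear scan are assumptions for clarity.</p>

```python
import numpy as np

def hamming_distance(a, b):
    """Count of pixels that differ between two binary images of equal shape."""
    return int(np.count_nonzero(a != b))

def nearest_neighbor_classify(query, train_images, train_labels):
    """1-NN classification: return the label of the closest training image
    and its distance (the distance is the conventional confidence proxy
    that the paper's learned confidence measure improves upon)."""
    distances = [hamming_distance(query, img) for img in train_images]
    best = int(np.argmin(distances))
    return train_labels[best], distances[best]

# Toy usage with 2x2 binary "images" standing in for digit bitmaps.
train_images = [np.array([[0, 0], [1, 1]]), np.array([[1, 1], [0, 0]])]
train_labels = [0, 1]
label, dist = nearest_neighbor_classify(np.array([[0, 1], [1, 1]]),
                                        train_images, train_labels)
```

<p>With a large training set, the linear scan above is the main cost; the abstract's point is that even this simple scheme, given enough data, reaches 99% accuracy on digits.</p>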
optical character recognition; learning (artificial intelligence); computer vision; handwritten character classification; nearest neighbor distance; simple statistical technique; large training database; classification accuracies; handwritten digits; error rate; distance metrics; standard nearest neighbor classification system; Hamming distance metric; pixel distance metric; penstroke features extraction; National Institute of Standards and Technology; confidence metric
H. Voorhees, S. Smith, M. Bourgoin and K. Sims, "Handwritten Character Classification Using Nearest Neighbor in Large Databases," in IEEE Transactions on Pattern Analysis & Machine Intelligence, vol. 16, no. 9, pp. 915-919, Sept. 1994.