Leave-One-Out-Training and Leave-One-Out-Testing Hidden Markov Models for a Handwritten Numeral Recognizer: The Implications of a Single Classifier and Multiple Classifications
Issue No. 12 - December (2009 vol. 31)
Robert Sabourin , École de Technologie Supérieure, Montréal
Paulo Rodrigo Cavalin , Génie de la Production Automatisée (GPA), Montréal
Alceu de Souza Britto , Informática Aplicada (PPGIa-PUCPR), Curitiba
Albert Hung-Ren Ko , University of Toronto, Toronto
Hidden Markov Models (HMMs) have been shown to be useful in handwritten pattern recognition. However, owing to their fundamental structure, they offer little resistance to unexpected noise in observation sequences. In other words, unexpected noise in a sequence may “break” the normal transition of states for that sequence, making it unrecognizable to the trained models. To resolve this problem, we propose a leave-one-out-training strategy, which makes the models more robust, and a leave-one-out-testing method, which compensates for some of the negative effects of this noise. The latter is an example of a system with a single classifier and multiple classifications. Compared with the 98.00 percent accuracy of the benchmark HMMs, the new system achieves a 98.88 percent accuracy rate on handwritten digits.
Hidden Markov Models, ensemble of classifiers, sequence, noise, leave one out, pattern recognition.
Robert Sabourin, Paulo Rodrigo Cavalin, Alceu de Souza Britto, Albert Hung-Ren Ko, "Leave-One-Out-Training and Leave-One-Out-Testing Hidden Markov Models for a Handwritten Numeral Recognizer: The Implications of a Single Classifier and Multiple Classifications", IEEE Transactions on Pattern Analysis & Machine Intelligence, vol. 31, no. 12, pp. 2168-2178, December 2009, doi:10.1109/TPAMI.2008.254
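The leave-one-out-testing idea described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the paper's feature extraction, HMM topology, and decision rule are not reproduced here. Each class's trained HMM is stood in for by a hypothetical scoring callable (higher score = better fit), and the `leave_one_out_classifications`, `vote`, and toy `models` names are all illustrative assumptions.

```python
def leave_one_out_classifications(sequence, models):
    """For each position i, drop observation i and classify the remaining
    subsequence with every class model.

    A single classifier (the set of trained models) thus produces multiple
    classifications, one per left-out observation; if one observation is
    noise, the subsequence that omits it is classified without that noise.
    """
    labels = []
    for i in range(len(sequence)):
        # Subsequence with the i-th observation left out.
        sub = sequence[:i] + sequence[i + 1:]
        # Pick the class whose (stand-in) model scores the subsequence highest.
        best = max(models, key=lambda cls: models[cls](sub))
        labels.append(best)
    return labels


def vote(labels):
    """Combine the multiple classifications by simple majority vote."""
    return max(set(labels), key=labels.count)


# Toy stand-ins for per-class HMM log-likelihood functions (illustrative only):
models = {"a": lambda s: sum(s), "b": lambda s: -sum(s)}
labels = leave_one_out_classifications([1, 2, 3], models)
final_label = vote(labels)
```

In this toy run, every subsequence of `[1, 2, 3]` has a positive sum, so all three classifications agree on class `"a"` and the vote is unanimous; with a real recognizer, the vote is where classifications made without the noisy observation can outweigh those made with it.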