Issue No. 12, Dec. 2012 (vol. 34)
DOI Bookmark: http://doi.ieeecomputersociety.org/10.1109/TPAMI.2012.50
Pingping Xiu , Microsoft Advertising R&D, Redmond, WA, USA
H. S. Baird , Dept. of Comput. Sci. & Eng., Lehigh Univ., Bethlehem, PA, USA
Whole-book recognition is a document image analysis strategy that operates on the complete set of a book's page images, using automatic adaptation to improve accuracy. We describe an algorithm that expects to be initialized with approximate iconic and linguistic models, derived from (generally errorful) OCR results and (generally imperfect) dictionaries, and then, guided entirely by evidence internal to the test set, corrects the models, which in turn yields higher recognition accuracy. The iconic model describes image formation and determines the behavior of a character-image classifier; the linguistic model describes word-occurrence probabilities. Our algorithm detects “disagreements” between these two models by measuring cross entropy between 1) the posterior probability distribution over character classes (the recognition results from image classification alone) and 2) the posterior probability distribution over word classes (the recognition results from image classification combined with linguistic constraints). We show how disagreements can identify candidates for model corrections at both the character and word levels. Some model corrections will reduce the error rate over the whole book, and these can be identified by comparing model disagreements, summed across the whole book, before and after the correction is applied. Experiments on passages up to 180 pages long show that when a candidate model adaptation reduces whole-book disagreement, it is also likely to correct recognition errors. Also, the longer the passage operated on by the algorithm, the more reliable this adaptation policy becomes, and the lower the error rate achieved. The best results occur when the iconic and linguistic models mutually correct one another. We have observed recognition error rates driven down by nearly an order of magnitude fully automatically, without supervision (or indeed any user intervention or interaction).
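The cross-entropy disagreement measure described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the toy posteriors and function names are invented, and the direction of the cross entropy (word-level posterior against character-level posterior) is an assumption.

```python
import math

def cross_entropy(p, q, eps=1e-12):
    """Cross entropy H(p, q) = -sum_c p(c) * log q(c), in nats.

    Here p and q are posterior distributions over the same set of
    character classes for one character image; eps guards log(0).
    """
    return -sum(pc * math.log(max(qc, eps)) for pc, qc in zip(p, q))

# Hypothetical posteriors over three character classes for one glyph:
# p_char comes from the iconic model (image classifier alone);
# p_word comes from image classification plus linguistic constraints.
p_char = [0.5, 0.3, 0.2]
p_word = [0.8, 0.1, 0.1]

# A large value signals that the two models "disagree" on this glyph,
# flagging it as a candidate site for a model correction.
disagreement = cross_entropy(p_word, p_char)
```

Summing such per-glyph disagreements across the whole book gives the quantity the algorithm compares before and after a candidate correction.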
Improvement is nearly monotonic, and asymptotic accuracy is stable, even over long runs. If implemented naively, the algorithm runs in time quadratic in the length of the book, but random subsampling and caching techniques speed it up by two orders of magnitude with negligible loss of accuracy. Whole-book recognition has potential applications in digital libraries as a safe unsupervised anytime algorithm.
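The acceptance policy and the subsampling speedup described above can be sketched as follows. This is an illustrative skeleton under assumed interfaces (a model object, a page-level disagreement function); the real algorithm's data structures and caching are not shown.

```python
import random

def total_disagreement(pages, page_disagreement, sample_size=None, seed=0):
    """Sum per-page model disagreement over the book.

    If sample_size is given, estimate the sum from a random subsample of
    pages (a stand-in for the paper's subsampling speedup, which trades
    a negligible loss of accuracy for a large reduction in running time).
    """
    if sample_size is not None and sample_size < len(pages):
        pages = random.Random(seed).sample(pages, sample_size)
    return sum(page_disagreement(p) for p in pages)

def maybe_apply(model, candidate, pages, disagree):
    """Accept a candidate model correction only if it lowers whole-book
    disagreement; otherwise keep the current model (a sketch of the
    acceptance policy, where disagree(model, page) scores one page)."""
    before = total_disagreement(pages, lambda p: disagree(model, p))
    after = total_disagreement(pages, lambda p: disagree(candidate, p))
    return candidate if after < before else model
```

Because each candidate correction is scored against every page, the naive loop is quadratic in book length, which is why the subsampling estimate matters in practice.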
probability, cache storage, digital libraries, document image processing, image classification, image sampling, optical character recognition, unsupervised anytime algorithm, whole-book recognition, document image analysis strategy, automatic adaptation, approximate iconic models, linguistic models, OCR results, recognition accuracy, image formation, character-image classifier, word-occurrence probabilities, posterior probability distribution, whole-book disagreement, adaptation policy, caching techniques, random subsampling, adaptation models, pragmatics, image recognition, character recognition, optical character recognition software, error analysis, computational modeling, cross entropy, document image recognition, book recognition, style consistency, isogeny, adaptive classification, adaptive OCR, adaptive machine learning, model adaptation, anytime algorithm
Pingping Xiu, H. S. Baird, "Whole-Book Recognition", IEEE Transactions on Pattern Analysis & Machine Intelligence, vol. 34, no. 12, pp. 2467-2480, Dec. 2012, doi:10.1109/TPAMI.2012.50