Issue No. 11 - November 2007 (vol. 29)
This paper presents a framework to restore the 2D content printed on documents in the presence of geometric distortion and non-uniform illumination. Compared with text-based document imaging approaches that correct distortion only to a level necessary to obtain sufficiently readable text or to facilitate optical character recognition (OCR), our work targets nontextual documents where the original printed content is desired. To achieve this goal, our framework acquires a 3D scan of the document's surface together with a high-resolution image. Conformal mapping is used to rectify geometric distortion by mapping the 3D surface back to a plane while minimizing angular distortion. This conformal "deskewing" assumes no parametric model of the document's surface and is suitable for arbitrary distortions. Illumination correction is performed by using the 3D shape to distinguish content gradient edges from illumination gradient edges in the high-resolution image. Integration is performed using only the content edges to obtain a reflectance image with significantly fewer illumination artifacts. This approach makes no assumptions about the light sources or their positions. The results of the geometric and photometric corrections are combined to produce the final output.
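The edge-selective integration step described above amounts to a gradient-domain reconstruction: gradients classified as illumination edges are zeroed, and the reflectance image is reintegrated from the remaining (content) gradients. The sketch below is a hypothetical least-squares illustration of this idea on a synthetic image, not the authors' implementation; the threshold-based edge classification stands in for the paper's 3D-shape-based classification, and `scipy` is assumed to be available.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import lsqr

def reintegrate(gx, gy):
    """Least-squares reconstruction of an image from forward-difference
    gradients gx (h, w-1) and gy (h-1, w); solution is defined up to a
    constant (lsqr returns the minimum-norm solution)."""
    h, w = gy.shape[0] + 1, gx.shape[1] + 1
    rows, cols, vals, rhs = [], [], [], []
    r = 0
    idx = lambda y, x: y * w + x
    # One equation per horizontal edge: u[y, x+1] - u[y, x] = gx[y, x]
    for y in range(h):
        for x in range(w - 1):
            rows += [r, r]; cols += [idx(y, x), idx(y, x + 1)]
            vals += [-1.0, 1.0]; rhs.append(gx[y, x]); r += 1
    # One equation per vertical edge: u[y+1, x] - u[y, x] = gy[y, x]
    for y in range(h - 1):
        for x in range(w):
            rows += [r, r]; cols += [idx(y, x), idx(y + 1, x)]
            vals += [-1.0, 1.0]; rhs.append(gy[y, x]); r += 1
    A = csr_matrix((vals, (rows, cols)), shape=(r, h * w))
    u = lsqr(A, np.array(rhs))[0]
    return u.reshape(h, w)

# Synthetic demo: a reflectance step corrupted by a smooth shading ramp.
content = np.zeros((6, 8)); content[:, 4:] = 1.0            # printed content
ramp = np.linspace(0.0, 0.5, 8)[None, :] * np.ones((6, 1))  # illumination
img = content + ramp
gx, gy = np.diff(img, axis=1), np.diff(img, axis=0)
# Stand-in for the 3D-based classifier: treat small, smooth gradients
# as illumination edges and zero them before reintegration.
gx[np.abs(gx) < 0.5] = 0.0
gy[np.abs(gy) < 0.5] = 0.0
flat = reintegrate(gx, gy)  # shading ramp removed, content step preserved
```

In this toy setup the smooth ramp gradients are suppressed, so the reintegrated image is piecewise constant with only the content step remaining; the paper's pipeline makes the content/illumination decision from the 3D surface rather than a magnitude threshold.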
Document Restoration, Geometric Correction, Shading Correction, Photometric Correction, Conformal Mapping, Document Processing
Ruigang Yang, Lin Yun, Mingxuan Sun, Michael S. Brown, W. Brent Seales, "Restoring 2D Content from Distorted Documents", IEEE Transactions on Pattern Analysis & Machine Intelligence, vol. 29, no. 11, pp. 1904-1916, November 2007, doi:10.1109/TPAMI.2007.1118