Sept. 18, 2011 to Sept. 21, 2011
This paper presents a new method, based on Fourier and moments features, for extracting words and characters from a video text line in any direction for recognition. Unlike existing methods, which feed the entire text line to the subsequent recognition algorithm, the proposed method passes each extracted character to the recognition algorithm, because the background of a single character is relatively simple compared to that of a text line or word. A Max-Min clustering criterion is introduced to obtain the text cluster from the extracted Fourier and moments feature set. A union of the text cluster with the Canny edge map of the input video text line is proposed to recover missing text candidates. A run-length criterion is then used to extract words. From each word, we propose a new idea for extracting characters from the text candidates of the word image, based on the fact that the text height difference at a character boundary column is smaller than that at other columns of the word image. We evaluate the method on a large dataset at three levels, namely text line, word and character, in terms of recall, precision and F-measure. In addition, we show that recognition results for extracted characters are better than those for words and lines. Our experimental setup involves 3527 characters, including Chinese, selected from the TRECVID 2005 and 2006 databases.
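The character-boundary idea in the abstract can be illustrated with a minimal sketch. This is not the authors' exact algorithm: it assumes a binarized word image (foreground text pixels = 1), computes a per-column text height, and splits at columns whose height drops to zero, a simplified stand-in for the paper's "small height difference at boundary columns" criterion. The function names and the toy image are illustrative only.

```python
# Hedged sketch (not the paper's exact method): split a binarized word
# image into character candidates using per-column text height, assuming
# boundary columns carry little or no text compared to other columns.

def column_heights(img):
    """Per-column text height: rows spanned by foreground (1) pixels."""
    heights = []
    for c in range(len(img[0])):
        rows = [r for r in range(len(img)) if img[r][c] == 1]
        heights.append(rows[-1] - rows[0] + 1 if rows else 0)
    return heights

def split_characters(img):
    """Return (start, end) column ranges of character candidates,
    treating zero-height columns as boundaries (simplified criterion)."""
    heights = column_heights(img)
    segments, start = [], None
    for c, h in enumerate(heights):
        if h > 0 and start is None:
            start = c                      # character begins
        elif h == 0 and start is not None:
            segments.append((start, c - 1))  # character ends at boundary
            start = None
    if start is not None:
        segments.append((start, len(heights) - 1))
    return segments

# Toy binary word image: two "characters" separated by an empty column.
word = [
    [1, 1, 0, 1, 1],
    [1, 0, 0, 0, 1],
    [1, 1, 0, 1, 1],
]
print(split_characters(word))  # → [(0, 1), (3, 4)]
```

In practice the paper operates on text candidates recovered from the Fourier-moments cluster and the Canny edge map, rather than a clean binary image as assumed here.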
Video word segmentation, Video character extraction, Fourier-Moments, Run length, Text height difference, Video character recognition
Palaiahnakote Shivakumara, Bolan Su, Deepak Rajendran, Chew Lim Tan, "A New Fourier-Moments Based Video Word and Character Extraction Method for Recognition", in Proc. 2011 International Conference on Document Analysis and Recognition (ICDAR 2011), pp. 1165-1169, doi:10.1109/ICDAR.2011.235