2012 International Conference on Advanced Computer Science Applications and Technologies (ACSAT)
Nov. 26, 2012 to Nov. 28, 2012
DOI Bookmark: http://doi.ieeecomputersociety.org/10.1109/ACSAT.2012.15
A novel statistical approach is described, enabling the automated extraction of large word lists from unsegmented corpora without reliance on existing dictionaries. The approach makes two main contributions. First, it is generic and has been applied successfully to both Chinese and Japanese. Second, it makes no use of punctuation information, so unlike most existing methods it requires no pre-processing to strip punctuation from the corpora or to pre-segment them at punctuation marks. Our experiments extract 14,087 Chinese words and 15,553 Japanese words, with precision over 80% for two-character Chinese words, over 90% for one-character Japanese words, and over 70% for two-character Japanese words. The method also successfully extracts most single-character words, including common functional characters in Chinese (those meaning "in," "and," "or," "also," the possessive "'s," and a common family name), common hiragana in Japanese, and punctuation marks such as the comma and question mark.
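The abstract does not specify which statistics the authors use, only that the method is statistical, dictionary-free, and needs no punctuation pre-processing. As a rough illustration of this family of techniques, the sketch below scores character n-grams from raw unsegmented text by internal cohesion (pointwise mutual information over binary splits) and boundary freedom (entropy of neighbouring characters), a common combination in unsupervised lexicon extraction. It is not the authors' method; the thresholds (MIN_COUNT, MIN_PMI, MIN_ENTROPY), the maximum word length, and the input path corpus.txt are all hypothetical.

# A minimal sketch of unsupervised word extraction from unsegmented text.
# Illustrates a generic cohesion-plus-boundary-entropy approach, NOT the
# exact statistics of Ke et al.; all thresholds below are hypothetical.
import math
from collections import Counter, defaultdict

MIN_COUNT, MIN_PMI, MIN_ENTROPY, MAX_LEN = 5, 3.0, 1.0, 4

def extract_words(text: str) -> dict:
    total = len(text)
    # Count all character n-grams up to MAX_LEN, plus left/right context chars.
    ngrams = Counter()
    left, right = defaultdict(Counter), defaultdict(Counter)
    for n in range(1, MAX_LEN + 1):
        for i in range(total - n + 1):
            g = text[i:i + n]
            ngrams[g] += 1
            if i > 0:
                left[g][text[i - 1]] += 1
            if i + n < total:
                right[g][text[i + n]] += 1

    def entropy(counter: Counter) -> float:
        # Shannon entropy of the neighbouring-character distribution.
        s = sum(counter.values())
        return -sum(c / s * math.log(c / s) for c in counter.values()) if s else 0.0

    words = {}
    for g, c in ngrams.items():
        if c < MIN_COUNT or len(g) < 2:  # multi-character candidates only
            continue
        # Cohesion: minimum PMI over all binary splits of the candidate.
        pmi = min(
            math.log(c * total / (ngrams[g[:k]] * ngrams[g[k:]]))
            for k in range(1, len(g))
        )
        # Boundary freedom: a true word should follow and precede many
        # different characters, so both boundary entropies should be high.
        freedom = min(entropy(left[g]), entropy(right[g]))
        if pmi >= MIN_PMI and freedom >= MIN_ENTROPY:
            words[g] = c
    return words

# Usage: feed raw, unsegmented text; no punctuation stripping is needed,
# since punctuation marks simply surface as high-frequency single tokens.
corpus = open("corpus.txt", encoding="utf-8").read()  # hypothetical path
for w, c in sorted(extract_words(corpus).items(), key=lambda x: -x[1])[:50]:
    print(w, c)

Note that because punctuation is never stripped, punctuation marks naturally emerge as frequent units bounded by high-entropy contexts, which is consistent with the abstract's report that punctuation is among the extracted single-character items.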
Keywords: natural language processing, statistical analysis
S. Ke, S. C. Shiu, B. Goertzel, G. Yu, X. Shi and C. Zhou, "Automated Extraction of Lexicon Applied both to Chinese and Japanese Corpora," 2012 International Conference on Advanced Computer Science Applications and Technologies (ACSAT), Kuala Lumpur, 2013, pp. 7-12.