2009 Ninth IEEE International Conference on Data Mining (ICDM 2009)
Miami, Florida
Dec. 6, 2009 to Dec. 9, 2009
ISSN: 1550-4786
ISBN: 978-0-7695-3895-2
pp: 428-437
Typical information extraction (IE) systems can be cast as sequence-labeling tasks that assign labels to words in natural language text. Their performance is limited by the availability of labeled words. To tackle this issue, we propose a semi-supervised approach that improves the sequence labeling procedure in IE through a class of algorithms with {\em self-learned features} (SLF). A supervised classifier is trained on annotated text sequences and used to classify each word in a large set of unannotated sentences. By averaging predicted labels over all occurrences in the unlabeled corpus, SLF training builds a class label distribution pattern for each word (or word attribute) in the dictionary and iteratively re-trains the current model with these distributions added as extra word {\em features}. The basic SLF variant models how likely each word is to be assigned to the target class types; several extensions are proposed, such as learning words' class boundary distributions. SLF exhibits robust and scalable behaviour and is easy to tune. We applied this approach to four classical IE tasks: named entity recognition (German and English), part-of-speech tagging (English), and gene name recognition. Experimental results show effective improvements over the supervised baselines on all tasks. In addition, when compared with the closely related self-training idea, this approach shows clear advantages.
semi-supervised learning, semi-supervised feature learning, structural output learning, sequence labeling, self-learned features, information extraction
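The SLF loop described in the abstract can be sketched in a few lines. The sketch below is illustrative only, not the authors' implementation: the toy corpus, label set, and the trivial per-word count "tagger" are all hypothetical stand-ins for the paper's supervised sequence labeler. It shows the core SLF step of averaging predicted label distributions over every occurrence of a word in the unlabeled corpus.

```python
from collections import defaultdict

# Hypothetical toy label set and corpora (the paper used NER/POS data).
LABELS = ["PER", "O"]
labeled = [
    [("john", "PER"), ("runs", "O")],
    [("mary", "PER"), ("sleeps", "O")],
]
unlabeled = [
    ["john", "sleeps"],
    ["mary", "runs"],
    ["bob", "runs"],
]

def train(sents):
    """Trivial supervised tagger (per-word label counts), standing in
    for the paper's annotated-sequence classifier."""
    counts = defaultdict(lambda: defaultdict(int))
    for sent in sents:
        for word, label in sent:
            counts[word][label] += 1
    return counts

def predict_dist(model, word):
    """Predicted label distribution for one word; uniform if unseen."""
    c = model.get(word)
    if not c:
        return {l: 1.0 / len(LABELS) for l in LABELS}
    total = sum(c.values())
    return {l: c.get(l, 0) / total for l in LABELS}

def self_learned_features(model, corpus):
    """Core SLF step: average the model's predicted label distributions
    over all occurrences of each word in the unlabeled corpus."""
    sums = defaultdict(lambda: defaultdict(float))
    occ = defaultdict(int)
    for sent in corpus:
        for word in sent:
            occ[word] += 1
            for l, p in predict_dist(model, word).items():
                sums[word][l] += p
    return {w: {l: s / occ[w] for l, s in d.items()}
            for w, d in sums.items()}

# One SLF iteration: tag the unlabeled corpus and build per-word
# label-distribution features. A full system would append these
# distributions to each word's feature vector and re-train the
# labeler, repeating until convergence.
model = train(labeled)
slf = self_learned_features(model, unlabeled)
```

Because unseen words receive a uniform distribution, the resulting features distinguish words the current model is confident about from words it knows nothing about, which is what the re-trained model exploits.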

P. Kuksa, R. Collobert, J. Weston, K. Sadamasa, Y. Qi and K. Kavukcuoglu, "Semi-Supervised Sequence Labeling with Self-Learned Features," 2009 Ninth IEEE International Conference on Data Mining (ICDM), Miami, Florida, 2009, pp. 428-437.