Los Angeles, CA
March 31, 2009 to April 2, 2009
DOI Bookmark: http://doi.ieeecomputersociety.org/10.1109/CSIE.2009.278
Instead of collecting ever more parallel training corpora, this paper aims to improve SMT performance by exploiting the full potential of existing parallel corpora. Inspired by the mechanism of string subsequence and word sequence kernels, we first propose a cross-lingual word kernel (CWK) SVM that classifies SMT training sentence pairs as literal or free translations, and then use these data to train SMT models. One experiment indicates that a larger training corpus does not always lead to higher decoding performance when the incremental data are not literal translations, and another experiment shows that properly enlarging the contribution of literal translations can improve SMT performance significantly.
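The abstract credits string subsequence and word sequence kernels as the inspiration for the CWK. As a rough illustration of that family of kernels, the sketch below implements a brute-force gap-weighted word sequence kernel; the cross-lingual step (projecting source words through a bilingual `lexicon` before comparison) is a hypothetical simplification for illustration, not the paper's actual CWK formulation.

```python
from itertools import combinations

def subsequence_kernel(s, t, n, lam):
    """Gap-weighted word sequence kernel K_n(s, t), brute force.

    Sums over common word subsequences of length n, weighting each
    matched pair of occurrences by lam ** (span in s + span in t),
    so gappier (freer) matches contribute less.
    """
    total = 0.0
    for i in combinations(range(len(s)), n):      # index tuples in s
        for j in combinations(range(len(t)), n):  # index tuples in t
            if all(s[a] == t[b] for a, b in zip(i, j)):
                span = (i[-1] - i[0] + 1) + (j[-1] - j[0] + 1)
                total += lam ** span
    return total

def cross_lingual_kernel(src, tgt, lexicon, n, lam):
    """Hypothetical cross-lingual variant: project source-language words
    into the target language via a bilingual lexicon, then compare the
    projected source sentence against the target sentence."""
    projected = [lexicon.get(w, w) for w in src]
    return subsequence_kernel(projected, tgt, n, lam)

# Identical word pairs score highest; gaps are penalized by lam.
print(subsequence_kernel(["a", "b"], ["a", "b"], 2, 0.5))       # → 0.0625
print(subsequence_kernel(["a", "b"], ["a", "c", "b"], 2, 0.5))  # → 0.03125
```

Such a kernel could serve as a custom similarity function inside an SVM (e.g. via a precomputed Gram matrix), with literally translated sentence pairs scoring higher than free translations; the exponential run time of this brute-force version makes it suitable only for short sequences, whereas practical implementations use the standard dynamic-programming recursion.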
Cross-lingual, Word Kernel SVM, SMT
Xiwu Han, "A Cross-Lingual Word Kernel SVM for SMT Training Corpus Selection", Proc. 2009 WRI World Congress on Computer Science and Information Engineering (CSIE 2009), pp. 626-630, doi:10.1109/CSIE.2009.278