2011 IEEE Fifth International Conference on Semantic Computing (2011)
Palo Alto, California USA
Sept. 18, 2011 to Sept. 21, 2011
DOI Bookmark: http://doi.ieeecomputersociety.org/10.1109/ICSC.2011.36
Machine Translation (MT) systems are evaluated and debugged using the BLEU automated metric. However, the current community implementation of BLEU is not ideal for MT system developers and researchers since it produces only textual output. I present a novel tool called iBLEU that organizes BLEU scoring information in a visual and easy-to-understand manner, making it easier for MT system developers and researchers to quickly locate documents and sentences on which their system performs poorly. It also allows comparing translations from two different MT systems. Furthermore, one can also choose to compare against publicly available MT systems, e.g., Google Translate and Bing Translator, with a single click. It runs on all major platforms and requires no setup whatsoever.
Keywords: machine translation, information visualization
N. Madnani, "iBLEU: Interactively Debugging and Scoring Statistical Machine Translation Systems," 2011 IEEE Fifth International Conference on Semantic Computing (ICSC), Palo Alto, California, USA, 2011, pp. 213-214.