Evaluating Translation Quality as Input to Product Development
Proceedings of the Second International Conference on Language Resources and Evaluation (LREC 2000)
Abstract
In this paper we present a corpus-based method for evaluating the translation quality of machine translation (MT) systems. We start with a shallow analysis of a large corpus and gradually focus attention on the translation problems. The method constitutes an efficient way to identify the most important grammatical and lexical weaknesses of an MT system and to guide development towards improved translation quality. The evaluation described in the paper was carried out as a collaboration between an MT technology developer, Sail Labs, and the Computational Linguistics group at the University of Zürich.