Manual and Automatic Paraphrases for MT Evaluation
Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016)
Abstract
Paraphrasing of reference translations has been shown to improve the correlation with human judgements in automatic evaluation of machine translation (MT) outputs. In this work, we present a new dataset for evaluating English-Czech translation based on automatic paraphrases. We compare this dataset with an existing set of manually created paraphrases and find that even automatic paraphrases can improve MT evaluation. We also propose and evaluate several criteria for selecting suitable reference translations from a larger set.