Title

How Does Automatic Machine Translation Evaluation Correlate With Human Scoring as the Number of Reference Translations Increases?

Author(s)

Andrew Finch, Yasuhiro Akiba, Eiichiro Sumita

ATR Spoken Language Translation Research Laboratories, 2-2-2 Hikaridai "Keihanna Science City", Kyoto, 619-0288, Japan

Session

P25-EW

Abstract

Automatic machine translation evaluation is a difficult task because of the wide diversity of valid translations that can result from a single source sentence or textual segment. Recently, a number of competing methods for automatic machine translation evaluation have been adopted by the research community; among the most widely used are BLEU, NIST, mWER and the F-measure. This work extends previous studies of how closely these evaluation techniques match human performance at ranking translation output, focusing on how they scale with increasing numbers of human-produced references. We measure the correlation of the automatic ranking of the output from nine different machine translation systems with the ranking derived from the scores assigned by nine human evaluators, using up to sixteen references per sentence. Our results show that evaluation performance improves with increasing numbers of references for all of the scoring methods except NIST, which only shows improvement with small numbers of references.
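
The paper itself does not include code, but the experiment the abstract describes can be sketched roughly as follows; the use of Spearman's rank correlation, the function names, and the toy per-system scores are illustrative assumptions, not the authors' actual implementation or data.

```python
# A minimal sketch (not the authors' code) of the kind of analysis described:
# for each reference-set size, take one automatic-metric score per MT system,
# rank the systems, and compare that ranking to the human ranking with a rank
# correlation. Scores below are hypothetical placeholders.
from scipy.stats import spearmanr


def rank_correlation(metric_scores, human_scores):
    """Spearman correlation between per-system automatic and human scores."""
    rho, _ = spearmanr(metric_scores, human_scores)
    return rho


# Hypothetical aggregate human score for each of nine MT systems.
human = [0.81, 0.74, 0.69, 0.66, 0.60, 0.55, 0.51, 0.44, 0.38]

# Hypothetical automatic-metric scores for the same systems,
# computed with different numbers of reference translations.
auto_by_num_refs = {
    1:  [0.30, 0.28, 0.25, 0.27, 0.22, 0.21, 0.19, 0.18, 0.15],
    4:  [0.41, 0.38, 0.35, 0.34, 0.30, 0.28, 0.25, 0.22, 0.20],
    16: [0.52, 0.47, 0.44, 0.41, 0.37, 0.34, 0.30, 0.26, 0.23],
}

for n_refs, auto in sorted(auto_by_num_refs.items()):
    print(f"{n_refs:2d} references: rho = {rank_correlation(auto, human):.3f}")
```

Plotting the resulting correlation against the number of references, separately for each metric (BLEU, NIST, mWER, F-measure), would reproduce the shape of the comparison the abstract reports.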

Keyword(s)

BLEU, NIST, mWER, F-measure, Machine Translation Evaluation, SMT, TDMT

Language(s)

English, Japanese

Full Paper

277.pdf