
Involving Language Professionals in the Evaluation of Machine Translation

Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC 2012)

DOI:10.63317/52bzkdit52pm

Abstract

Significant breakthroughs in machine translation seem possible only if human translators are brought into the loop. While automatic evaluation and scoring mechanisms such as BLEU have enabled fast system development, it is not clear how systems can meet real-world (quality) requirements in industrial translation scenarios today. The taraXÜ project paves the way for wide usage of hybrid machine translation outputs through various feedback loops in system development. In a consortium of research and industry partners, the project integrates human translators into the development process to rate and post-edit machine translation outputs, thus collecting feedback for possible improvements.

Details

Paper ID
lrec2012-main-129
Pages
pp. 1127-1130
BibKey
avramidis-etal-2012-involving
Editor
N/A
Publisher
European Language Resources Association (ELRA)
ISSN
2522-2686
ISBN
978-2-9517408-7-7
Conference
Eighth International Conference on Language Resources and Evaluation
Location
Istanbul, Turkey
Date
21–27 May 2012

Authors

  • Eleftherios Avramidis
  • Aljoscha Burchardt
  • Christian Federmann
  • Maja Popović
  • Cindy Tscherwinka
  • David Vilar
