
Improving Machine Translation of Educational Content via Crowdsourcing

Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)

DOI:10.63317/4oexxebwbmz6

Abstract

The limited availability of in-domain training data is a major issue in the training of application-specific neural machine translation models. Professional outsourcing of bilingual data collection is costly and often not feasible. In this paper we analyze the use of crowdsourcing as a scalable way to obtain translations of target in-domain data, bearing in mind that the translations may be of lower quality. We apply crowdsourcing with carefully designed quality controls to create parallel corpora for the educational domain by collecting translations of MOOC texts from English into eleven languages, which we then use to fine-tune neural machine translation models previously trained on general-domain data. Our results indicate that crowdsourced data collected with proper quality controls consistently yields performance gains over both general-domain baseline systems and systems fine-tuned with pre-existing in-domain corpora.

Details

Paper ID
lrec2018-main-528
Pages
N/A
BibKey
behnke-etal-2018-improving
Editor
N/A
Publisher
European Language Resources Association (ELRA)
ISSN
2522-2686
ISBN
979-10-95546-00-9
Conference
Eleventh International Conference on Language Resources and Evaluation
Location
Miyazaki, Japan
Date
7–12 May 2018

Authors

  • Maximiliana Behnke

  • Antonio Valerio Miceli Barone

  • Rico Sennrich

  • Vilelmini Sosoni

  • Thanasis Naskos

  • Eirini Takoulidou

  • Maria Stasimioti

  • Menno van Zaanen

  • Sheila Castilho

  • Federico Gaspari

  • Panayota Georgakopoulou

  • Valia Kordoni

  • Markus Egg

  • Katia Lida Kermanidis
