Tools and Guidelines for Principled Machine Translation Development
Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016)
Abstract
This work addresses the need to support Machine Translation (MT) development cycles with a complete workflow of MT evaluation methods. Our aim is to assess, compare and improve MT system variants. We report on novel tools and practices that cover a range of measures, developed to enable a principled and informed approach to MT development. Our toolkit for automatic evaluation offers quick, detailed comparison of MT system variants through automatic metrics and n-gram feedback, alongside manual evaluation via edit distance, error annotation and task-based feedback.
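As a rough illustration of the edit-distance comparison the abstract mentions, the sketch below scores two hypothetical MT system outputs against a reference by token-level Levenshtein distance (fewer edits means the hypothesis is closer to the reference). The function name, the sentences, and the tokenization by whitespace are all assumptions for illustration, not part of the paper's toolkit.

```python
def edit_distance(ref_tokens, hyp_tokens):
    """Token-level Levenshtein distance via classic dynamic programming."""
    m, n = len(ref_tokens), len(hyp_tokens)
    # dp[i][j] = minimum edits to turn the first i reference tokens
    # into the first j hypothesis tokens
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i  # deletions
    for j in range(n + 1):
        dp[0][j] = j  # insertions
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = 0 if ref_tokens[i - 1] == hyp_tokens[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # delete
                           dp[i][j - 1] + 1,        # insert
                           dp[i - 1][j - 1] + sub)  # match / substitute
    return dp[m][n]

# Hypothetical reference and two MT system variants
ref = "the cat sat on the mat".split()
sys_a = "the cat sits on the mat".split()
sys_b = "a cat was sitting on mat".split()

print(edit_distance(ref, sys_a))  # 1 edit: closer to the reference
print(edit_distance(ref, sys_b))  # 4 edits: further away
```

Comparing such per-sentence distances across system variants gives the kind of quick manual-evaluation signal the toolkit aggregates alongside automatic metrics.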