
LG-Eval: A Toolkit for Creating Online Language Evaluation Experiments

Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC 2012)

DOI:10.63317/5d6s5fuhd75q

Abstract

In this paper we describe the LG-Eval toolkit for creating online language evaluation experiments. LG-Eval is the direct result of our work setting up and carrying out the human evaluation experiments in several of the Generation Challenges shared tasks. It provides tools for creating experiments with different kinds of rating instruments, for allocating items to evaluators, and for collecting the evaluation scores.

Details

Paper ID
lrec2012-main-570
Pages
pp. 4033-4037
BibKey
kow-belz-2012-lg
Editor
N/A
Publisher
European Language Resources Association (ELRA)
ISSN
2522-2686
ISBN
978-2-9517408-7-7
Conference
Eighth International Conference on Language Resources and Evaluation
Location
Istanbul, Turkey
Date
21 May 2012 – 27 May 2012

Authors

  • Eric Kow
  • Anja Belz