Summary of the paper

Title: LG-Eval: A Toolkit for Creating Online Language Evaluation Experiments
Authors: Eric Kow and Anja Belz
Abstract: In this paper we describe the LG-Eval toolkit for creating online language evaluation experiments. LG-Eval is the direct result of our work setting up and carrying out the human evaluation experiments in several of the Generation Challenges shared tasks. It provides tools for creating experiments with different kinds of rating tools, allocating items to evaluators, and collecting the evaluation scores.
Topics: Natural Language Generation; Tools, systems, applications; Evaluation methodologies
Bibtex:
@InProceedings{KOW12.957,
  author = {Eric Kow and Anja Belz},
  title = {LG-Eval: A Toolkit for Creating Online Language Evaluation Experiments},
  booktitle = {Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12)},
  year = {2012},
  month = {may},
  date = {23-25},
  address = {Istanbul, Turkey},
  editor = {Nicoletta Calzolari (Conference Chair) and Khalid Choukri and Thierry Declerck and Mehmet Uğur Doğan and Bente Maegaard and Joseph Mariani and Asuncion Moreno and Jan Odijk and Stelios Piperidis},
  publisher = {European Language Resources Association (ELRA)},
  isbn = {978-2-9517408-7-7},
  language = {english}
 }
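
The abstract mentions allocating items to evaluators. As a purely illustrative sketch, not taken from LG-Eval itself, the following Python fragment shows one common balanced approach to such allocation: a Latin-square style rotation in which every evaluator sees every item and only the presentation order varies, so each item appears in each position equally often. All function and variable names here are hypothetical.

# Hypothetical sketch of balanced item allocation (Latin-square style rotation).
# Not LG-Eval code; names are illustrative only.

def allocate_items(items, evaluators):
    """Assign every item to every evaluator, rotating presentation order
    so that each item occupies each position equally often."""
    n = len(items)
    allocation = {}
    for i, evaluator in enumerate(evaluators):
        k = i % n
        # Evaluator i sees the item list rotated by i positions.
        allocation[evaluator] = items[k:] + items[:k]
    return allocation

if __name__ == "__main__":
    items = ["item-A", "item-B", "item-C", "item-D"]
    evaluators = ["eval-1", "eval-2", "eval-3", "eval-4"]
    for evaluator, ordered_items in allocate_items(items, evaluators).items():
        print(evaluator, ordered_items)

With four items and four evaluators this produces a 4x4 Latin square of presentation orders, which is one standard way to control for order effects in human evaluation experiments of the kind the abstract describes.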