
Human annotation of ASR error regions: Is “gravity” a sharable concept for human annotators?

Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC 2014)

DOI:10.63317/2qoyid3337gv

Abstract

This paper is concerned with human assessments of the severity of errors in ASR outputs. We deliberately provided no annotation guidelines, so that each annotator involved in the study could judge the “seriousness” of an ASR error according to their own scientific background. Eight human annotators took part in an annotation task on three distinct corpora, one of which was annotated twice without the annotators being informed of the duplication. None of the computed results (inter-annotator agreement, edit distance, majority annotation) reveal any strong correlation between the considered criteria and the level of seriousness, which underlines the difficulty for a human to determine whether an ASR error is serious or not.

Details

Paper ID
lrec2014-main-601
Pages
pp. 3050-3056
BibKey
luzzati-etal-2014-human
Editor
N/A
Publisher
European Language Resources Association (ELRA)
ISSN
2522-2686
ISBN
978-2-9517408-8-4
Conference
Ninth International Conference on Language Resources and Evaluation
Location
Reykjavik, Iceland
Date
26–31 May 2014

Authors

  • Daniel Luzzati
  • Cyril Grouin
  • Ioana Vasilescu
  • Martine Adda-Decker
  • Eric Bilinski
  • Nathalie Camelin
  • Juliette Kahn
  • Carole Lailler
  • Lori Lamel
  • Sophie Rosset
