
Increasing Argument Annotation Reproducibility by Using Inter-annotator Agreement to Improve Guidelines

Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)

DOI:10.63317/5fpa74ypio28

Abstract

In this paper we present a methodology to improve Argument annotation guidelines by exploiting inter-annotator agreement measures. After a first stage of the annotation effort, we analyzed inter-annotator agreement to detect problematic issues. Some concepts turned out to be ill-defined; we addressed these by redefining high-level annotation goals. Other concepts are well-delimited but complex; for these, the annotation protocol was extended and detailed. Moreover, as can be expected, we show that the distinctions on which human annotators agree less are also those where automatic analyzers perform worse. Thus, the reproducibility of Argument Mining results can be improved by raising inter-annotator agreement in the training material. Following this methodology, we are enhancing a corpus annotated with argumentation, available at https://github.com/PLN-FaMAF/ArgumentMiningECHR together with the guidelines and agreement analyses. These analyses can be used to weight the performance figures of automatic systems, with lower penalties for cases where human annotators agree less.
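To make the two analyses concrete, below is a minimal Python sketch, not the authors' exact procedure: (1) per-label inter-annotator agreement (one-vs-rest Cohen's kappa) to flag ill-defined concepts, and (2) an agreement-weighted accuracy in which a system's errors on low-agreement distinctions are penalized less. The label set, annotations, and weighting scheme are hypothetical illustrations.

```python
# Sketch of agreement-driven guideline diagnosis and agreement-weighted
# evaluation. All data and labels below are toy/hypothetical.
from collections import Counter

def cohen_kappa(a, b):
    """Cohen's kappa between two equal-length label sequences."""
    assert len(a) == len(b)
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    freq_a, freq_b = Counter(a), Counter(b)
    # Chance agreement from each annotator's marginal label distribution.
    expected = sum(freq_a[l] * freq_b.get(l, 0) for l in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

def per_label_kappa(a, b, labels):
    """One-vs-rest kappa per label; low values flag ill-defined concepts."""
    return {l: cohen_kappa([x == l for x in a], [x == l for x in b])
            for l in labels}

def agreement_weighted_accuracy(gold, system, weights):
    """Accuracy where each item counts in proportion to the human-agreement
    weight of its gold label (one plausible weighting, not the paper's)."""
    total = sum(weights[g] for g in gold)
    hits = sum(weights[g] for g, s in zip(gold, system) if g == s)
    return hits / total

# Toy annotations of argument components by two annotators.
ann1 = ["claim", "premise", "premise", "claim", "none", "premise"]
ann2 = ["claim", "premise", "claim",   "claim", "none", "none"]
labels = ["claim", "premise", "none"]

kappas = per_label_kappa(ann1, ann2, labels)
print(kappas)  # a low kappa on a label suggests refining its guideline

# Score a (toy) system against annotator 1, down-weighting labels on which
# humans agree less. In practice, negative kappas would be floored at 0.
system = ["claim", "premise", "premise", "none", "none", "premise"]
print(agreement_weighted_accuracy(ann1, system, kappas))
```

On this toy data, the sketch would surface a lower kappa for "premise" than for "none", mirroring the paper's observation that low-agreement distinctions are also the ones automatic analyzers handle worst.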

Details

Paper ID
lrec2018-main-640
Pages
N/A
BibKey
teruel-etal-2018-increasing
Editor
N/A
Publisher
European Language Resources Association (ELRA)
ISSN
2522-2686
ISBN
979-10-95546-00-9
Conference
Eleventh International Conference on Language Resources and Evaluation
Location
Miyazaki, Japan
Date
7–12 May 2018

Authors

  • Milagro Teruel
  • Cristian Cardellino
  • Fernando Cardellino
  • Laura Alonso Alemany
  • Serena Villata
