LREC 2014 Main Conference

Crowdsourcing as a preprocessing for complex semantic annotation tasks

Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC 2014)

DOI:10.63317/2ia7ak882jcf

Abstract

This article outlines a methodology that uses crowdsourcing to reduce the experts' workload in complex semantic tasks. We split turker-annotated datasets into a high-agreement block, which is left unmodified, and a low-agreement block, which is re-annotated by experts. The resulting annotations show higher observed agreement. We identify different annotation biases for turkers and experts.
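The splitting step described in the abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the agreement threshold, the data layout, and the label names are assumptions.

```python
from collections import Counter

def split_by_agreement(items, threshold=0.8):
    """Partition crowdsourced annotations into a high-agreement block
    (kept as-is) and a low-agreement block (to be re-annotated by experts).

    `items` maps an item id to the list of turker labels it received.
    The 0.8 threshold is an illustrative assumption, not a value
    taken from the paper.
    """
    high, low = {}, {}
    for item_id, labels in items.items():
        # Observed agreement here: share of annotators voting for the
        # majority label (one simple way to operationalize agreement).
        top_count = Counter(labels).most_common(1)[0][1]
        agreement = top_count / len(labels)
        (high if agreement >= threshold else low)[item_id] = labels
    return high, low

# Hypothetical annotations: 5 turker labels per item.
annotations = {
    "s1": ["literal", "literal", "literal", "literal", "metonymic"],
    "s2": ["literal", "metonymic", "underspec", "literal", "metonymic"],
}
high_block, low_block = split_by_agreement(annotations)
# "s1" (4/5 agreement) stays in the high block; "s2" (2/5) goes to experts.
```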

Details

Paper ID
lrec2014-main-399
Pages
pp. 229-234
BibKey
alonso-romeo-2014-crowdsourcing
Editor
N/A
Publisher
European Language Resources Association (ELRA)
ISSN
2522-2686
ISBN
978-2-9517408-8-4
Conference
Ninth International Conference on Language Resources and Evaluation
Location
Reykjavik, Iceland
Date
26-31 May 2014

Authors

  • Héctor Martínez Alonso

  • Lauren Romeo
