Crowdsourcing as a preprocessing for complex semantic annotation tasks
Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC 2014)
Abstract
This article outlines a methodology that uses crowdsourcing to reduce the workload of experts on complex semantic annotation tasks. We split turker-annotated datasets into a high-agreement block, which is left unmodified, and a low-agreement block, which is re-annotated by experts. The resulting annotations show higher observed agreement. We also identify distinct annotation biases for turkers and experts.
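The splitting step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the per-item agreement measure (fraction of annotators matching the majority label) and the 0.8 cutoff are assumptions chosen for the example.

```python
from collections import Counter

# Assumed cutoff for routing items to expert re-annotation (not from the paper).
AGREEMENT_THRESHOLD = 0.8

def observed_agreement(labels):
    """Fraction of annotators who agree with the majority label."""
    counts = Counter(labels)
    return max(counts.values()) / len(labels)

def split_by_agreement(annotations, threshold=AGREEMENT_THRESHOLD):
    """Split an item -> labels mapping into high- and low-agreement blocks."""
    high, low = {}, {}
    for item, labels in annotations.items():
        block = high if observed_agreement(labels) >= threshold else low
        block[item] = labels
    return high, low

# Hypothetical example: three turker labels per item.
data = {
    "s1": ["AGENT", "AGENT", "AGENT"],  # full agreement -> kept as-is
    "s2": ["AGENT", "THEME", "GOAL"],   # low agreement -> expert queue
}
high, low = split_by_agreement(data)
```

Here `s1` lands in the high-agreement block and `s2` in the low-agreement block destined for expert re-annotation.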