LREC 2018

Grounding Gradable Adjectives through Crowdsourcing

Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)

DOI:10.63317/5ii9w43m49ix

Abstract

In order to build technology that has the ability to answer questions relevant to national and global security, e.g., on food insecurity in certain parts of the world, one has to implement machine reading technology that extracts causal mechanisms from texts. Unfortunately, many of these texts describe these interactions using vague, high-level language. One particular example is the use of gradable adjectives, i.e., adjectives that can take a range of magnitudes such as small or slight. Here we propose a method for estimating specific concrete groundings for a set of such gradable adjectives. We use crowdsourcing to gather human language intuitions about the impact of each adjective, then fit a linear mixed effects model to this data. The resulting model is able to estimate the impact of novel instances of these adjectives found in text. We evaluate our model in terms of its ability to generalize to unseen data and find that it has a predictive R² of 0.632 in general, and 0.677 on a subset of high-frequency adjectives.
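The core idea of the abstract — mapping crowdsourced ratings of gradable adjectives to numeric groundings via a regression model — can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline: the ratings, annotator IDs, and scale below are invented, and the fit uses plain least squares over one-hot adjective indicators (a fixed-effects simplification; the paper's linear mixed effects model would additionally include random effects, e.g. per annotator).

```python
import numpy as np

# Hypothetical crowdsourced judgments: (adjective, annotator, rated impact on a 0-1 scale).
# All values are made up for illustration.
ratings = [
    ("slight", "w1", 0.10), ("slight", "w2", 0.15), ("slight", "w3", 0.12),
    ("small",  "w1", 0.25), ("small",  "w2", 0.20), ("small",  "w3", 0.22),
    ("large",  "w1", 0.80), ("large",  "w2", 0.85), ("large",  "w3", 0.78),
]

adjectives = sorted({adj for adj, _, _ in ratings})
idx = {adj: i for i, adj in enumerate(adjectives)}

# One-hot design matrix over adjectives (fixed effects only).
X = np.zeros((len(ratings), len(adjectives)))
y = np.zeros(len(ratings))
for row, (adj, _, impact) in enumerate(ratings):
    X[row, idx[adj]] = 1.0
    y[row] = impact

# Least-squares fit; with one-hot predictors each coefficient is simply
# the mean rating for that adjective, which serves as its grounding.
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
grounding = dict(zip(adjectives, coef))
```

A learned grounding like this lets a reader system replace a vague modifier ("a slight increase") with an estimated magnitude when assembling causal mechanisms from text.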

Details

Paper ID
lrec2018-main-529
Pages
N/A
BibKey
sharp-etal-2018-grounding
Editor
N/A
Publisher
European Language Resources Association (ELRA)
ISSN
2522-2686
ISBN
979-10-95546-00-9
Conference
Eleventh International Conference on Language Resources and Evaluation
Location
Miyazaki, Japan
Date
7–12 May 2018

Authors

  • Rebecca Sharp
  • Mithun Paul
  • Ajay Nagesh
  • Dane Bell
  • Mihai Surdeanu
