
Conversational Implicatures through the Lens of LLMs

Proceedings of the Fifteenth Language Resources and Evaluation Conference (LREC 2026)

DOI:10.63317/5dqc2g73d3do

Abstract

Recent research has explored the capacity of Large Language Models (LLMs) to perform pragmatic reasoning and interpret complex pragmatic phenomena. However, such phenomena are inherently ambiguous, and even human evaluations are highly variable. Many existing studies directly compare human and model responses while assuming a single "correct" interpretation, thereby overlooking the natural variability that characterizes human pragmatic understanding. This raises two key issues: (1) the need for novel evaluation methods that account for interpretive variability and allow for meaningful comparison between humans and models, and (2) the potential limitations of current linguistic theories in capturing the richness of human pragmatic behavior. We propose that LLMs can serve not only as benchmarks for human-model alignment, but also as tools for investigating the nature of pragmatic phenomena and their relationship to linguistic theory. To this end, we developed a handcrafted dataset encompassing eight types of conversational implicatures. Our study addresses three main research questions: (1) Do LLMs process conversational implicatures differently from humans? (2) If so, how do these differences manifest? (3) What do these findings reveal about the cognitive capacities of LLMs and the explanatory adequacy of pragmatic theory?

Details

Paper ID
lrec2026-main-389
Pages
pp. 4955-4966
BibKey
lombardi-etal-2026-conversational
Editor
N/A
Publisher
European Language Resources Association (ELRA)
ISSN
2522-2686
ISBN
978-2-493814-49-4
Conference
The Fifteenth Language Resources and Evaluation Conference (LREC 2026)
Location
Palma, Mallorca, Spain
Date
11–16 May 2026

Authors

  • Agnese Lombardi

  • Alessandro Lenci
