
Semantic Capacity in Language Learners and LLMs: A Case Study of Quantifier Scope

Proceedings of the Fifteenth Language Resources and Evaluation Conference (LREC 2026)

DOI:10.63317/43c9u8ugd6m7

Abstract

This study investigates the semantic capacity of large language models (LLMs) through the lens of quantifier scope interpretation. Sentences containing multiple quantifiers often give rise to interpretive ambiguities, and the range of available readings can vary across languages. Adopting a cross-linguistic perspective, we examine how LLMs interpret quantifier scope in English and Chinese, using model-generated probabilities to assess the relative likelihood of competing interpretations. Human similarity (HS) scores quantify the extent to which LLMs emulate human performance across language groups. Results reveal that most LLMs prefer surface scope interpretations, aligning with human tendencies, while only some differentiate between English and Chinese in their inverse scope preferences, mirroring human patterns. HS scores highlight variability in how closely LLMs approximate human behavior, but their overall potential to align with humans is notable. Linguistic identity, instantiated through monolingual and bilingual English or Chinese personas, was found to influence LLM behavior. Differences in model architecture, scale, and especially the language background of models' pre-training data significantly affect how closely LLMs approximate human quantifier scope interpretations.

Details

Paper ID
lrec2026-main-755
Pages
pp. 9602-9617
BibKey
fang-etal-2026-semantic
Editor
N/A
Publisher
European Language Resources Association (ELRA)
ISSN
2522-2686
ISBN
978-2-493814-49-4
Conference
The Fifteenth Language Resources and Evaluation Conference (LREC 2026)
Location
Palma, Mallorca, Spain
Date
11–16 May 2026

Authors

  • Shaohua Fang
  • Yue Li
  • Yan Cong
