
Common Sense vs. Morality: The Curious Case of Narrative Focus Bias in LLMs

Proceedings of the Fifteenth Language Resources and Evaluation Conference (LREC 2026)

DOI:10.63317/23hsqksy9475

Abstract

Large Language Models (LLMs) are increasingly deployed across diverse real-world applications and user communities, so it is crucial that these models remain both morally grounded and knowledge-aware. In this work, we uncover a critical limitation of current LLMs: their tendency to prioritize moral reasoning over commonsense understanding. To investigate this phenomenon, we introduce COMORAL, a novel benchmark dataset containing commonsense contradictions embedded within moral dilemmas. Through extensive evaluation of ten LLMs across different model sizes, we find that existing models consistently struggle to identify such contradictions without a prior signal. Furthermore, we observe a pervasive narrative focus bias, wherein LLMs more readily detect commonsense contradictions when they are attributed to a secondary character rather than the primary (narrator) character. Our comprehensive analysis underscores the need for enhanced reasoning-aware training to improve the commonsense robustness of large language models.

Details

Paper ID: lrec2026-main-835
Pages: pp. 10653–10663
BibKey: purkayastha-etal-2026-common
Editor: N/A
Publisher: European Language Resources Association (ELRA)
ISSN: 2522-2686
ISBN: 978-2-493814-49-4
Conference: The Fifteenth Language Resources and Evaluation Conference (LREC 2026)
Location: Palma, Mallorca, Spain
Date: 11–16 May 2026

Authors

  • Saugata Purkayastha
  • Pranav Kushare
  • Pragya Paramita Pal
  • Sukannya Purkayastha
