
Cultural and Knowledge Biases in LLMs through the Lens of Entity-Aware Machine Translation

Proceedings of the Fifteenth Language Resources and Evaluation Conference (LREC 2026)

DOI:10.63317/3jxgnspt4srr

Abstract

Large Language Models (LLMs) demonstrate strong multilingual capabilities yet exhibit systematic cultural biases that affect entity-aware machine translation. While external knowledge integration improves translation accuracy, the extent of these benefits across varying degrees of cultural specificity remains unexplored. We propose a three-level cultural specificity framework (Culturally Agnostic, Culturally Sensitive, and Culturally Local) to systematically analyze how cultural context affects entity translation difficulty and the utility of external knowledge. Through experiments spanning 11 LLMs and 10 languages, we demonstrate that external knowledge provides substantially greater improvements for culturally local entities (up to 70% in m-ETA) than for culturally agnostic ones. Our analysis reveals distinct behavioral patterns across model tiers: closed and open-weight models show synergistic improvements in both entity accuracy and overall translation quality, while open-data models struggle with instruction-following despite improved entity accuracy.

Details

Paper ID
lrec2026-main-692
Pages
pp. 8794-8812
BibKey
xu-etal-2026-cultural
Editor
N/A
Publisher
European Language Resources Association (ELRA)
ISSN
2522-2686
ISBN
978-2-493814-49-4
Conference
The Fifteenth Language Resources and Evaluation Conference (LREC 2026)
Location
Palma, Mallorca, Spain
Date
11–16 May 2026

Authors

  • Lu Xu

  • Luca Moroni

  • Roberto Navigli
