Cultural and Knowledge Biases in LLMs through the Lens of Entity-Aware Machine Translation
Proceedings of the Fifteenth Language Resources and Evaluation Conference (LREC 2026)
Abstract
Large Language Models (LLMs) demonstrate strong multilingual capabilities yet exhibit systematic cultural biases that affect entity-aware machine translation. While external knowledge integration improves translation accuracy, the extent of these benefits across varying degrees of cultural specificity remains unexplored. We propose a three-level cultural specificity framework (Culturally Agnostic, Culturally Sensitive, and Culturally Local) to systematically analyze how cultural context affects entity translation difficulty and the utility of external knowledge. Through experiments spanning 11 LLMs and 10 languages, we demonstrate that external knowledge provides substantially greater improvements for culturally local entities (up to 70% in m-ETA) than for culturally agnostic ones. Our analysis reveals distinct behavioral patterns across model tiers: closed and open-weight models show synergistic improvements in both entity accuracy and overall translation quality, whereas open-data models struggle with instruction-following despite improved entity accuracy.