
Breaking the Benchmark: Revealing LLM Bias via Minimal Contextual Augmentation

Proceedings of the Fifteenth Language Resources and Evaluation Conference (LREC 2026)

DOI:10.63317/5a6nbh2tnoeb

Abstract

Large Language Models have been shown to exhibit stereotypical biases in their representations and behavior due to discriminatory patterns in the data they have been trained on. Despite significant progress in developing methods and models that refrain from using stereotypical information in their decision-making, recent work has shown that approaches used for bias alignment are brittle. In this work, we introduce a novel and general augmentation framework that involves three plug-and-play steps and is applicable to a number of fairness evaluation benchmarks. By applying this augmentation to a fairness evaluation dataset, the Bias Benchmark for Question Answering (BBQ), we find that Large Language Models (LLMs), including state-of-the-art open and closed weight models, are susceptible to perturbations of their inputs, showing a higher likelihood of behaving stereotypically. Furthermore, we find that such models are more likely to exhibit biased behavior when the target demographic belongs to a community less studied in the literature, underlining the need to expand fairness and safety research to include more diverse communities.

Details

Paper ID
lrec2026-main-322
Pages
pp. 4070-4092
BibKey
miandoab-etal-2026-breaking
Editor
N/A
Publisher
European Language Resources Association (ELRA)
ISSN
2522-2686
ISBN
978-2-493814-49-4
Conference
The Fifteenth Language Resources and Evaluation Conference (LREC 2026)
Location
Palma, Mallorca, Spain
Date
11–16 May 2026

Authors

  • Kaveh Eskandari Miandoab
  • Mahammed Kamruzzaman
  • Arshia Gharooni
  • Gene Louis Kim
  • Vasanth Sarathy
  • Ninareh Mehrabi
