
How Far Can Bias Go? Tracing Bias from Pre-Training Data to Alignment

Proceedings of the Fifteenth Language Resources and Evaluation Conference (LREC 2026)

DOI:10.63317/4zeoky6waeng

Abstract

As LLMs are increasingly integrated into user-facing applications, addressing biases that perpetuate societal inequalities is crucial. While much work has gone into measuring and mitigating biases, fewer studies have investigated their origins. Therefore, this study examines the propagation of representational gender-occupation bias from pre-training data to LLM generations. Using zero-shot prompting and token co-occurrence analyses, we explore how biases in the pre-training data influence model generations. Our findings reveal that representational biases present in the pre-training data are amplified in the model generations, regardless of hyperparameters and prompting type. By comparing gender representation in the pre-training data with real-world distributions, our research highlights discrepancies between the data and the model, underscoring the importance of further work in mitigating bias at the data level.
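The token co-occurrence analysis the abstract mentions can be sketched roughly as follows; the word lists, window size, and corpus here are illustrative assumptions, not the paper's actual lexicons or setup.

```python
from collections import Counter

# Hypothetical word lists for illustration only; the study's actual
# gender and occupation lexicons are not reproduced here.
FEMALE_TERMS = {"she", "her", "woman", "women"}
MALE_TERMS = {"he", "his", "man", "men"}
OCCUPATIONS = {"nurse", "engineer", "doctor", "teacher"}

def cooccurrence_counts(tokens, window=10):
    """Count how often each occupation word co-occurs with gendered
    terms within a fixed token window -- a simple proxy for measuring
    representational gender-occupation bias in a corpus."""
    counts = {occ: Counter() for occ in OCCUPATIONS}
    for i, tok in enumerate(tokens):
        if tok in OCCUPATIONS:
            context = tokens[max(0, i - window): i + window + 1]
            counts[tok]["female"] += sum(t in FEMALE_TERMS for t in context)
            counts[tok]["male"] += sum(t in MALE_TERMS for t in context)
    return counts

# Toy corpus for demonstration.
tokens = "she is a nurse and he is an engineer".split()
print(cooccurrence_counts(tokens, window=5))
```

Comparing such counts between the pre-training corpus and model generations (and against real-world occupational statistics) is one way to trace whether the model amplifies the data's skew.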

Details

Paper ID
lrec2026-main-315
Pages
pp. 3975-3995
BibKey
thaler-etal-2026-how
Editor
N/A
Publisher
European Language Resources Association (ELRA)
ISSN
2522-2686
ISBN
978-2-493814-49-4
Conference
The Fifteenth Language Resources and Evaluation Conference (LREC 2026)
Location
Palma, Mallorca, Spain
Date
11–16 May 2026

Authors

  • Marion Thaler

  • Abdullatif Köksal

  • Alina Leidinger

  • Anna Korhonen

  • Hinrich Schütze
