How Far Can Bias Go? Tracing Bias from Pre-Training Data to Alignment
Proceedings of the Fifteenth Language Resources and Evaluation Conference (LREC 2026)
Abstract
As LLMs are increasingly integrated into user-facing applications, addressing biases that perpetuate societal inequalities is crucial. While much work has gone into measuring and mitigating biases, fewer studies have investigated their origins. To address this gap, this study examines the propagation of representational gender-occupation bias from pre-training data to LLM generations. Using zero-shot prompting and token co-occurrence analyses, we explore how biases in the pre-training data influence model outputs. Our findings reveal that representational biases present in the pre-training data are amplified in the model generations, regardless of hyperparameter settings and prompting type. By comparing gender representation in the pre-training data with real-world distributions, our research highlights discrepancies between the data and the model, underscoring the importance of further work on mitigating bias at the data level.
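The token co-occurrence analysis mentioned above can be illustrated with a minimal sketch. The word lists, window size, and corpus below are hypothetical placeholders, not the paper's actual lexicons or data; the idea is simply to count how often occupation terms appear near gendered terms.

```python
from collections import Counter

# Hypothetical gender and occupation lexicons for illustration only;
# the study's actual word lists and corpora are not reproduced here.
MALE_TERMS = {"he", "him", "his", "man"}
FEMALE_TERMS = {"she", "her", "hers", "woman"}
OCCUPATIONS = {"nurse", "engineer", "doctor", "teacher"}

def cooccurrence_counts(tokens, window=3):
    """Count how often each occupation term co-occurs with male vs.
    female terms within a +/- `window` token span."""
    counts = {occ: Counter() for occ in OCCUPATIONS}
    for i, tok in enumerate(tokens):
        if tok in OCCUPATIONS:
            lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
            # Scan the context window, excluding the occupation token itself.
            for ctx in tokens[lo:i] + tokens[i + 1:hi]:
                if ctx in MALE_TERMS:
                    counts[tok]["male"] += 1
                elif ctx in FEMALE_TERMS:
                    counts[tok]["female"] += 1
    return counts

tokens = "the nurse said she was tired . the engineer said he was late".split()
counts = cooccurrence_counts(tokens)
# counts["nurse"]["female"] -> 1 ; counts["engineer"]["male"] -> 1
```

Aggregated over a large corpus, such counts yield per-occupation gender ratios that can be compared between the pre-training data, the model's generations, and real-world labor statistics.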