Investigating How LLMs Propagate Female Stereotypes: Comparing What Models Say via Prompts with What They Represent in Their Embeddings
Proceedings of the Fifteenth Language Resources and Evaluation Conference (LREC 2026)
Abstract
As Large Language Models (LLMs) are increasingly deployed in sensitive domains, concerns about their encoding and reproduction of social bias have intensified. We examine how gender stereotypes are represented in embeddings and expressed in outputs across three models: BERT, base LLaMA-2-7b, and instruction-tuned LLaMA-2-7b-Chat. Focusing on seven female-oriented stereotype categories, we compare embedding-level bias, quantified with Directional Embedding Probing, against output-level behavior measured via masked token prediction (BERT) and narrative prompt completions (LLaMA models). LLaMA-2-Chat showed the strongest representational–behavioral alignment, with female-aligned scores ranging from 60% to 100% and a significant point-biserial correlation (r = 0.55, p = 0.0008). BERT exhibited weaker alignment (0%–60%; r = 0.39, p = 0.054), while base LLaMA-2 showed intermediate but inconsistent patterns. These findings suggest that instruction tuning is associated with clearer alignment between internal representations and generated outputs, and that prompt design plays a critical role in surfacing latent bias. The study contributes to fairness research by emphasizing the need to assess both internal representations and their behavioral expression in LLMs.
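To make the two measurement levels concrete, the sketch below illustrates output-level probing via BERT masked token prediction and a point-biserial correlation between binary output labels and continuous embedding-level scores. It is a minimal, hypothetical example assuming the Hugging Face transformers and scipy packages; the templates and embedding-level scores are illustrative placeholders, not the stereotype categories or Directional Embedding Probing values used in the study.

# Hypothetical illustration: output-level bias via masked token prediction,
# then a point-biserial correlation against embedding-level scores.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM
from scipy.stats import pointbiserialr

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

# Illustrative stereotype-laden templates (not the paper's actual prompts).
templates = [
    "[MASK] is a nurse who cares for the patients.",
    "[MASK] stayed home to look after the children.",
    "[MASK] is an engineer who designs bridges.",
]

def female_aligned(sentence: str) -> int:
    """Return 1 if BERT assigns higher probability to 'she' than 'he' at the mask."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
    probs = logits[0, mask_pos[0]].softmax(dim=-1)
    p_she = probs[tokenizer.convert_tokens_to_ids("she")]
    p_he = probs[tokenizer.convert_tokens_to_ids("he")]
    return int(p_she > p_he)

output_labels = [female_aligned(t) for t in templates]

# Placeholder embedding-level bias scores (e.g., one directional-probe score
# per item, aligned with output_labels); values here are invented.
embedding_scores = [0.72, 0.65, -0.10]

r, p = pointbiserialr(output_labels, embedding_scores)
print(f"point-biserial r = {r:.2f}, p = {p:.4f}")

With many items per stereotype category, the same correlation quantifies how strongly internal representations predict generated behavior, which is the alignment statistic reported above.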