LREC-COLING 2024 (Main Conference)

Pre-Trained Language Models Represent Some Geographic Populations Better than Others

Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)

DOI: 10.63317/2tgb9p6gmi4e

Abstract

This paper measures the skew in how well two families of LLMs represent diverse geographic populations. A spatial probing task is used with geo-referenced corpora to measure the degree to which pre-trained language models from the OPT and BLOOM series represent diverse populations around the world. Results show that these models perform much better for some populations than others. In particular, populations across the US and the UK are represented quite well while those in South and Southeast Asia are poorly represented. Analysis shows that both families of models largely share the same skew across populations. At the same time, this skew cannot be fully explained by sociolinguistic factors, economic factors, or geographic factors. The basic conclusion from this analysis is that pre-trained models do not equally represent the world’s population: there is a strong skew towards specific geographic populations. This finding challenges the idea that a single model can be used for all populations.
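The abstract describes the spatial probing task only at a high level. As a rough illustration of how such a per-population comparison could be run (not the authors' actual method), the sketch below scores a small OPT checkpoint on a toy geo-referenced corpus by region, using per-token perplexity as a stand-in for representation quality; the model name, the example sentences, and the perplexity measure are all assumptions for illustration.

```python
# Hypothetical sketch: compare how well a pre-trained causal LM fits
# geo-referenced text from different regions. This is an illustration,
# not the paper's actual spatial probing task.
from collections import defaultdict
import math

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "facebook/opt-125m"  # smallest OPT checkpoint, chosen for speed

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

# Toy geo-referenced corpus: (region, text) pairs. The paper instead uses
# large geo-referenced corpora covering populations around the world.
corpus = [
    ("US", "The weather in Chicago has been unusually warm this spring."),
    ("UK", "The trains out of Manchester were delayed again this morning."),
    ("South Asia", "The monsoon arrived early in Kerala this year."),
]

def perplexity(text: str) -> float:
    """Per-token perplexity of the model on a single text."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])
    return math.exp(out.loss.item())

scores = defaultdict(list)
for region, text in corpus:
    scores[region].append(perplexity(text))

# Lower mean perplexity suggests the model represents that region's text better.
for region, vals in scores.items():
    print(f"{region}: mean perplexity = {sum(vals) / len(vals):.1f}")
```

Any comparable fit measure (pseudo-log-likelihood, masked prediction accuracy, downstream probe accuracy) could replace perplexity here; the point is only that model quality is aggregated per geographic population and then compared across populations.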

Details

Paper ID
lrec2024-main-1135
Pages
pp. 12966-12976
BibKey
dunn-etal-2024-pre
Editor
N/A
Publisher
European Language Resources Association (ELRA) and ICCL
ISSN
2522-2686
ISBN
979-10-95546-34-4
Conference
Joint International Conference on Computational Linguistics, Language Resources and Evaluation
Location
Turin, Italy
Date
20–25 May 2024

Authors

  • Jonathan Dunn

  • Benjamin Adams

  • Harish Tayyar Madabushi
