
Why So Separate: Analyzing In-Context Learning from a Vector Space Perspective

Proceedings of the Fifteenth Language Resources and Evaluation Conference (LREC 2026)

DOI: 10.63317/2o2ek4komzqz

Abstract

In-context learning (ICL) is a popular prompting strategy for large language models. ICL allows models to learn tasks from demonstrative examples alone, without any weight updates or training. Nevertheless, it is still largely unclear why ICL works. In this paper, we investigate ICL from a new viewpoint, namely a vector space perspective, and derive insights about ICL from this analysis. In our experiments, we extract the hidden representations, i.e., embeddings, created by a large language model when an ICL prompt is passed through it. We find that the embeddings generated under ICL are separable in the vector space. The degree of separability depends on the difficulty of the task, the size of the model, and other factors such as the labels of the demonstrative examples. We also find that, especially for large models, separability is indicative of classification performance. As an application, we use our findings to explain peculiarities of ICL and to select demonstrative examples for ICL. Experiments across multiple datasets show that this way of selecting examples consistently outperforms the commonly used random selection method.
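To make the abstract's procedure concrete, the sketch below illustrates one plausible way to extract ICL embeddings and quantify their separability. It is not the authors' code: the model name, the toy sentiment demonstrations and queries, the choice of the final-token last-layer hidden state, and the silhouette score as a separability proxy are all assumptions made purely for illustration.

```python
# Minimal sketch (assumed setup, not the paper's implementation):
# pass an ICL prompt through a causal LM, take the hidden state of the
# final prompt token as the "ICL embedding", and measure how well the
# embeddings of different classes separate in the vector space.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from sklearn.metrics import silhouette_score

MODEL_NAME = "gpt2"  # placeholder; the paper studies models of varying size
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, output_hidden_states=True)
model.eval()

# Demonstrative examples prepended to every query (ICL prompt).
DEMOS = (
    "Review: A wonderful film. Sentiment: positive\n"
    "Review: A complete waste of time. Sentiment: negative\n"
)

def icl_embedding(query: str) -> torch.Tensor:
    """Return the last-layer hidden state of the final token of the ICL prompt."""
    prompt = DEMOS + f"Review: {query} Sentiment:"
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    # outputs.hidden_states[-1] has shape (1, seq_len, hidden_dim)
    return outputs.hidden_states[-1][0, -1]

# Toy queries with gold labels, used only to illustrate the measurement.
queries = ["I loved every minute.", "Terrible acting.",
           "An instant classic.", "Boring and dull."]
labels = [1, 0, 1, 0]

embeddings = torch.stack([icl_embedding(q) for q in queries]).numpy()
# Higher silhouette score -> the two classes are better separated in embedding space.
print("separability (silhouette):", silhouette_score(embeddings, labels))
```

Under this reading, the paper's example-selection application would amount to preferring demonstration sets whose resulting embeddings score higher on such a separability measure, rather than sampling demonstrations at random.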

Details

Paper ID
lrec2026-main-007
Pages
pp. 93-106
BibKey
kalmbach-etal-2026-why
Editor
N/A
Publisher
European Language Resources Association (ELRA)
ISSN
2522-2686
ISBN
978-2-493814-49-4
Conference
The Fifteenth Language Resources and Evaluation Conference (LREC 2026)
Location
Palma, Mallorca, Spain
Date
11–16 May 2026

Authors

  • Tobias Kalmbach

  • Sandipan Sikdar

Links