Appeal, Align, Divide? Stance Detection for Group-Directed Messages in German Parliamentary Debates
Proceedings of the Fifteenth Language Resources and Evaluation Conference (LREC 2026)
Abstract
This paper presents a new benchmark for detecting group-based appeals, i.e., positive or negative references to social groups, in German parliamentary debates. First, group mentions are identified as targets for stance detection. Next, three human annotators assign stance labels to the group mentions, coding the speaker’s perspective towards the specific group. The resulting benchmark data is then used to investigate the capacity of Large Language Models (LLMs) to detect politicians’ stances towards social groups. We explore the potential of different prompting strategies (zero-shot prompting, few-shot prompting, Chain-of-Thought) for this task and compare the results to a supervised BERT baseline, showing that in low-resource scenarios LLMs can outperform smaller fine-tuned models without the need to annotate large datasets.