
Prompt-Based Stance Control in German: An Evaluation of LLMs for Experimental Research on Attitude Change

Proceedings of the Fifteenth Language Resources and Evaluation Conference (LREC 2026)

DOI:10.63317/2tvnax68shcy

Abstract

How much can Large Language Models (LLMs) influence the attitudes and opinions of their users? Answering this question requires controlled pre/post-treatment experiments, where participants interact with LLMs that consistently adopt a predefined political stance. Such experiments, however, are only possible if LLMs can be reliably steered to hold these stances throughout the interactions. In this work, we evaluate whether state-of-the-art LLMs can be effectively stance-controlled in German, thereby enabling experiments on human–LLM interactions. First, using a corpus of realistic user prompts, we find that LLMs are predominantly neutral, making them infeasible for said experiments. We then show that a prompt-based stance control method can reliably guide models to argue for or against a particular topic. Finally, we analyze confounding factors like topic and stance of the initial user prompts. We find that control is easiest when the target stance aligns with topical priors of the model or a user’s prompt. Further, the models maintain a comparable style across target stances — a key prerequisite for pre/post-treatment experiments. Taken together, our results demonstrate that stance-controlled LLMs are feasible and practically useful for experiments on user attitude change.

Details

Paper ID
lrec2026-main-644
Pages
pp. 8122-8140
BibKey
omiecienski-etal-2026-prompt
Editor
N/A
Publisher
European Language Resources Association (ELRA)
ISSN
2522-2686
ISBN
978-2-493814-49-4
Conference
The Fifteenth Language Resources and Evaluation Conference (LREC 2026)
Location
Palma, Mallorca, Spain
Date
11–16 May 2026

Authors

  • Florian Omiecienski
  • Cornelia Sindermann
  • Agnieszka Falenska
