Prompt-Based Stance Control in German: An Evaluation of LLMs for Experimental Research on Attitude Change
Proceedings of the Fifteenth Language Resources and Evaluation Conference (LREC 2026)
Abstract
How much can Large Language Models (LLMs) influence the attitudes and opinions of their users? Answering this question requires controlled pre/post-treatment experiments in which participants interact with LLMs that consistently adopt a predefined political stance. Such experiments, however, are only possible if LLMs can be reliably steered to hold these stances throughout the interaction. In this work, we evaluate whether state-of-the-art LLMs can be effectively stance-controlled in German, thereby enabling experiments on human–LLM interaction. First, using a corpus of realistic user prompts, we find that LLMs respond predominantly neutrally, making them unsuitable for such experiments out of the box. We then show that a prompt-based stance control method can reliably guide models to argue for or against a given topic. Finally, we analyze confounding factors such as the topic and stance of the initial user prompt. We find that control is easiest when the target stance aligns with the model's topical priors or with the stance expressed in the user's prompt. Further, the models maintain a comparable style across target stances, a key prerequisite for pre/post-treatment experiments. Taken together, our results demonstrate that stance-controlled LLMs are feasible and practically useful for experiments on user attitude change.
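To make the setup concrete, the sketch below shows what prompt-based stance control can look like in practice: a German system prompt pins the model to a target stance on a topic before the user's message is passed along. This is a minimal, hypothetical illustration, not the prompt used in the paper; the topic, the German wording, and the build_stance_messages helper are all assumptions introduced here for exposition.

```python
# Minimal sketch of prompt-based stance control (illustrative only, not the
# authors' actual prompt). A German system prompt fixes the model's stance
# on a topic before the user's message is forwarded.

STANCE_INSTRUCTIONS = {
    "pro": "Vertritt eine klar befürwortende Haltung zum Thema: {topic}.",
    "contra": "Vertritt eine klar ablehnende Haltung zum Thema: {topic}.",
}

def build_stance_messages(topic: str, stance: str, user_prompt: str) -> list[dict]:
    """Return a chat-style message list with a stance-controlling system prompt.

    `stance` is "pro" or "contra"; the system prompt pins the target stance
    for the whole interaction, as pre/post-treatment studies require.
    """
    system_prompt = (
        "Du bist ein Diskussionspartner. "
        + STANCE_INSTRUCTIONS[stance].format(topic=topic)
        + " Behalte diese Haltung während der gesamten Unterhaltung bei "
        "und bleibe dabei höflich und sachlich."
    )
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt},
    ]

if __name__ == "__main__":
    # Example: steer the model to argue against a (hypothetical) topic.
    messages = build_stance_messages(
        topic="Tempolimit auf Autobahnen",
        stance="contra",
        user_prompt="Was hältst du von einem Tempolimit?",
    )
    for message in messages:
        print(f"{message['role']}: {message['content']}")
```

The resulting message list can be passed to any chat-completion-style API. Note that only the stance instruction varies between conditions while the style instruction stays identical, mirroring the abstract's requirement that tone remain comparable across target stances.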