A Teacher-Student Approach to Creating Verified Synthetic Clarification and Correction Dialogues for TableQA Tasks
Proceedings of the Fifteenth Language Resources and Evaluation Conference (LREC 2026)
Abstract
Real dialogues with AI assistants for solving table question-answering tasks often follow dynamic, unpredictable paths due to imperfect information provided by the user or present in the data, which must be detected and handled. Developing datasets that capture such user-AI interactions is difficult and time-consuming. In this work, we develop a novel framework for synthetically generating controlled, multi-turn conversations between a user and an AI assistant for the task of table-based question answering (TableQA); these conversations can be generated from any existing dataset of fully specified TableQA examples in any target domain. Each conversation aims to solve a table-based reasoning question through collaborative effort, modeling one of two real-world scenarios: (1) an AI-initiated clarification, or (2) a user-initiated correction. Critically, we employ a strong teacher LLM to verify the functional correctness of our synthetic conversations, ensuring high quality. Finally, we use synthetic datasets generated from TableQA tasks to benchmark frontier LLMs. We find that even larger models struggle to effectively issue clarification questions and to accurately integrate user feedback for corrections, highlighting important areas for future research.