
Self-Explanation Prompting Improves Dialogue Understanding in Large Language Models

Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)

DOI: 10.63317/4shdjhs9d9ud

Abstract

Task-oriented dialogue (TOD) systems help users accomplish a variety of tasks through multi-turn dialogues, but large language models (LLMs) often struggle to comprehend these intricate contexts. In this study, we propose a novel “Self-Explanation” prompting strategy to enhance the comprehension abilities of LLMs in multi-turn dialogues. This task-agnostic approach requires the model to analyze each dialogue utterance before executing the task, thereby improving performance across various dialogue-centric tasks. Experimental results on six benchmark datasets confirm that our method consistently outperforms other zero-shot prompts and matches or exceeds the efficacy of few-shot prompts, demonstrating its potential as a powerful tool for enhancing LLMs’ comprehension in complex dialogue tasks.
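To make the idea concrete, the strategy described in the abstract can be sketched as a prompt template that asks the model to explain each utterance before performing the downstream task. The template wording and helper function below are a hypothetical illustration, not the paper's exact prompt:

```python
# Hypothetical sketch of a zero-shot "Self-Explanation" prompt for a
# task-oriented dialogue task. The template wording is an assumption,
# not the prompt used in the paper.

def build_self_explanation_prompt(dialogue, task):
    """Format a multi-turn dialogue and ask the model to first explain
    each utterance, then perform the downstream task."""
    turns = "\n".join(
        f"[{i}] {speaker}: {utterance}"
        for i, (speaker, utterance) in enumerate(dialogue, start=1)
    )
    return (
        "Dialogue:\n"
        f"{turns}\n\n"
        "First, explain the intent of each utterance in order ([1], [2], ...). "
        f"Then, based on your explanations, {task}"
    )

prompt = build_self_explanation_prompt(
    [
        ("User", "I need a cheap hotel in the north."),
        ("System", "Okay, any preference on parking?"),
        ("User", "Yes, free parking please."),
    ],
    task="list the user's constraints as slot-value pairs.",
)
print(prompt)
```

Because the approach is task-agnostic, only the trailing task instruction changes between tasks (e.g. state tracking vs. response selection), while the per-utterance explanation step stays the same.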

Details

Paper ID
lrec2024-main-1269
Pages
pp. 14567–14578
BibKey
gao-etal-2024-self
Editor
N/A
Publisher
European Language Resources Association (ELRA) and ICCL
ISSN
2522-2686
ISBN
979-10-95546-34-4
Conference
Joint International Conference on Computational Linguistics, Language Resources and Evaluation
Location
Turin, Italy
Date
20–25 May 2024

Authors

  • Haoyu Gao

  • Ting-En Lin

  • Hangyu Li

  • Min Yang

  • Yuchuan Wu

  • Wentao Ma

  • Fei Huang

  • Yongbin Li
