LREC-COLING 2024 Main Conference

Beyond the Known: Investigating LLMs Performance on Out-of-Domain Intent Detection

Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)

DOI:10.63317/585yucni32u5

Abstract

Out-of-domain (OOD) intent detection aims to determine whether a user's query falls outside the predefined domain of the system, which is crucial for the proper functioning of task-oriented dialogue (TOD) systems. Previous methods address it by fine-tuning discriminative models. Recently, some studies have explored the application of large language models (LLMs), represented by ChatGPT, to various downstream tasks, but their ability on the OOD detection task remains unclear. This paper conducts a comprehensive evaluation of LLMs under various experimental settings and then outlines the strengths and weaknesses of LLMs. We find that LLMs exhibit strong zero-shot and few-shot capabilities but are still at a disadvantage compared with models fine-tuned on full resources. Going deeper, through a series of additional analysis experiments, we discuss and summarize the challenges faced by LLMs and provide guidance for future work, including injecting domain knowledge, strengthening knowledge transfer from in-domain (IND) to OOD, and understanding long instructions.

Details

Paper ID
lrec2024-main-0210
Pages
pp. 2354-2364
BibKey
wang-etal-2024-beyond
Editor
N/A
Publisher
European Language Resources Association (ELRA) and ICCL
ISSN
2522-2686
ISBN
979-10-95546-34-4
Conference
Joint International Conference on Computational Linguistics, Language Resources and Evaluation
Location
Turin, Italy
Date
20–25 May 2024

Authors

  • Pei Wang
  • Keqing He
  • Yejie Wang
  • Xiaoshuai Song
  • Yutao Mou
  • Jingang Wang
  • Yunsen Xian
  • Xunliang Cai
  • Weiran Xu

Links