LREC-COLING 2024, Main Conference

Enhancing Low-Resource LLMs Classification with PEFT and Synthetic Data

Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)

DOI:10.63317/4da25cphxk8i

Abstract

Large Language Models (LLMs) operating in zero-shot or few-shot settings achieve competitive results in text classification tasks. In-Context Learning (ICL) typically achieves better accuracy than the zero-shot setting, but at a cost in efficiency, due to the longer input prompt. In this paper, we propose a strategy to make LLMs as efficient as zero-shot text classifiers, while getting comparable or better accuracy than ICL. Our solution targets the low-resource setting, i.e., when only 4 examples per class are available. Using a single LLM and few-shot real data, we perform a sequence of generation, filtering and Parameter-Efficient Fine-Tuning steps to create a robust and efficient classifier. Experimental results show that our approach leads to competitive results on multiple text classification datasets.
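The generation, filtering, and fine-tuning sequence described in the abstract can be sketched as a minimal pipeline. This is an illustrative stand-in, not the paper's implementation: every function body here is a toy placeholder (in the actual approach, generation and classification are performed by a single LLM, and the final step is Parameter-Efficient Fine-Tuning such as adapter-based training); the function names and the keyword-based classifier are assumptions made for the example.

```python
def generate_synthetic(seed_examples, n_per_class):
    """Stand-in for LLM-based generation: condition on the few-shot
    real seeds to produce new labeled variants for each class."""
    synthetic = []
    for text, label in seed_examples:
        for i in range(n_per_class):
            synthetic.append((f"{text} (variant {i})", label))
    return synthetic

def filter_examples(examples, classify):
    """Consistency filter: keep only synthetic examples whose predicted
    label agrees with the label they were generated for."""
    return [(text, label) for text, label in examples if classify(text) == label]

def toy_classifier(text):
    """Stand-in for the zero-shot LLM classifier (keyword heuristic)."""
    return "pos" if "good" in text else "neg"

def peft_finetune(base_classify, train_data):
    """Stand-in for Parameter-Efficient Fine-Tuning: in the real pipeline,
    a small set of adapter weights in the LLM would be trained on the
    seed plus filtered synthetic data; here we return the base classifier."""
    return base_classify

# Low-resource setting: a handful of real examples per class.
seeds = [("good movie", "pos"), ("bad plot", "neg")]
synthetic = generate_synthetic(seeds, n_per_class=4)
filtered = filter_examples(synthetic, toy_classifier)
classifier = peft_finetune(toy_classifier, seeds + filtered)
```

The resulting classifier runs on the bare input text alone, which is the efficiency point of the paper: unlike ICL, no demonstration examples are prepended to the prompt at inference time.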

Details

Paper ID
lrec2024-main-0533
Pages
pp. 6017-6023
BibKey
patwa-etal-2024-enhancing
Editor
N/A
Publisher
European Language Resources Association (ELRA) and ICCL
ISSN
2522-2686
ISBN
979-10-95546-34-4
Conference
Joint International Conference on Computational Linguistics, Language Resources and Evaluation
Location
Turin, Italy
Date
20–25 May 2024

Authors

  • Parth Patwa
  • Simone Filice
  • Zhiyu Chen
  • Giuseppe Castellucci
  • Oleg Rokhlenko
  • Shervin Malmasi

Links