LREC 2020 Workshop

When is Multi-task Learning Beneficial for Low-Resource Noisy Code-switched User-generated Algerian Texts?

Proceedings of the 4th Workshop on Computational Approaches to Code Switching

DOI:10.63317/4dmcetb7p5kr

Abstract

We investigate when it is beneficial to simultaneously learn representations for several tasks in low-resource settings. To this end, we work with noisy user-generated texts in Algerian, a low-resource, non-standardised Arabic variety. To mitigate data scarcity, we experiment with progressively and jointly learning four tasks, namely code-switch detection, named entity recognition, spelling normalisation and correction, and identifying users' sentiments. The selection of these tasks is motivated by the lack of labelled data for automatic morpho-syntactic or semantic sequence-tagging tasks for Algerian, in contrast to the settings assumed by much multi-task learning work in NLP. Our empirical results show that multi-task learning is beneficial for some tasks in particular settings, and that the effect of each task on the others, the order of the tasks, and the size of the training data of the data-richer task all matter. Moreover, the data augmentation we performed with no external resources proved beneficial for certain tasks.
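The abstract describes hard-parameter-sharing multi-task learning with a progressive task schedule: a shared representation feeds task-specific heads, and tasks are trained one after another in a fixed order. The toy sketch below illustrates only that overall structure; the class names, the n-gram features, and the four task datasets are illustrative assumptions, not the authors' actual implementation.

```python
# Toy sketch of a shared encoder with per-task heads, trained under a
# progressive task schedule (assumed structure; not the paper's model).

class SharedEncoder:
    """Shared representation: character bigram sets per token."""
    def encode(self, tokens):
        return [{tok[i:i + 2] for i in range(len(tok) - 1)} for tok in tokens]

class TaskHead:
    """Task-specific head: memorises (feature-set, label) pairs."""
    def __init__(self, name):
        self.name = name
        self.memory = []  # list of (feature set, label)

    def fit(self, features, labels):
        self.memory.extend(zip(features, labels))

    def predict(self, feature_set):
        # Nearest stored example by bigram overlap.
        best = max(self.memory, key=lambda m: len(m[0] & feature_set))
        return best[1]

def progressive_train(encoder, heads, datasets, order):
    """Train task heads one after another in a fixed order; the encoder
    is shared, so the task order determines what each head sees when."""
    for task in order:
        tokens, labels = datasets[task]
        heads[task].fit(encoder.encode(tokens), labels)

encoder = SharedEncoder()
heads = {t: TaskHead(t) for t in ("cs_detect", "ner", "norm", "sentiment")}
# Hypothetical miniature datasets for the four tasks named in the abstract.
datasets = {
    "cs_detect": (["salam", "hello"], ["ar", "en"]),
    "ner":       (["alger", "wafia"], ["LOC", "PER"]),
    "norm":      (["slm"], ["salam"]),
    "sentiment": (["mliha"], ["pos"]),
}
progressive_train(encoder, heads, datasets,
                  order=("cs_detect", "ner", "norm", "sentiment"))
print(heads["cs_detect"].predict(encoder.encode(["salam"])[0]))
```

In a real model the shared encoder's parameters would be updated by every task's loss, which is why, as the abstract reports, the task order and the relative data sizes affect the outcome.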

Details

Paper ID
lrec2020-ws-calcs-3
Pages
pp. 17-25
BibKey
adouane-bernardy-2020-multi
Editor
N/A
Publisher
European Language Resources Association (ELRA)
ISSN
N/A
ISBN
N/A
Workshop
Proceedings of the 4th Workshop on Computational Approaches to Code Switching
Location
Marseille, France
Date
11–16 May 2020

Authors

  • Wafia Adouane
  • Jean-Philippe Bernardy
