TDMulti: A Tunisian Dialect-Modern Standard Arabic Multitask Corpus with a Context-Aware Cross-Attention BERT Model
Proceedings of the Fifteenth Language Resources and Evaluation Conference (LREC 2026)
Abstract
The Tunisian dialect dominates online communication in Tunisia but remains severely under-resourced in natural language processing. We introduce the first multitask corpus of Tunisian dialect manually aligned with its Modern Standard Arabic equivalents. The TDMulti corpus consists of 3,100 social media comments annotated with 12,400 labels across four interrelated tasks: hate speech detection, sentiment polarity classification, sarcasm identification, and topic category classification. TDMulti thus provides a new benchmark for studying pragmatic and social aspects of the Tunisian dialect in relation to Modern Standard Arabic. To exploit this resource, we propose a deep learning model based on transformer architectures, in three variants: a baseline multitask classifier, a cross-attention model that aligns Tunisian dialect and Modern Standard Arabic representations, and a context-aware cross-attention mechanism with task-specific masking. We evaluate the approach using large pre-trained Arabic language models under different configurations. Results show that the context-aware cross-attention model achieves the best performance, particularly on sarcasm and hate speech detection. TDMulti is released under an open license, contributing a novel resource to advance research on Arabic dialect processing.
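The core mechanism mentioned in the abstract, cross-attention from dialectal tokens over their Modern Standard Arabic counterparts with a task-specific mask, can be sketched as follows. This is an illustrative sketch only, not the authors' implementation: all shapes, names, and the masking scheme are assumptions made for the example.

```python
# Illustrative sketch (assumed, not the paper's code): scaled dot-product
# cross-attention where Tunisian-dialect (TD) token vectors query their
# aligned Modern Standard Arabic (MSA) token vectors, with a boolean
# task-specific mask blocking TD-MSA pairs deemed irrelevant to a task.
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(td, msa, task_mask=None):
    """td: (n_td, d) query vectors; msa: (n_msa, d) key/value vectors.
    task_mask: optional (n_td, n_msa) boolean; False positions are blocked."""
    d = td.shape[-1]
    scores = td @ msa.T / np.sqrt(d)                 # (n_td, n_msa) similarities
    if task_mask is not None:
        scores = np.where(task_mask, scores, -1e9)   # mask out blocked pairs
    weights = softmax(scores, axis=-1)               # attention distribution per TD token
    return weights @ msa                             # TD tokens enriched with MSA context

rng = np.random.default_rng(0)
td = rng.standard_normal((4, 8))     # 4 hypothetical TD token embeddings, dim 8
msa = rng.standard_normal((6, 8))    # 6 hypothetical MSA token embeddings
mask = np.ones((4, 6), dtype=bool)
mask[:, -2:] = False                 # e.g., a task hides the last two MSA tokens
out = cross_attention(td, msa, mask)
print(out.shape)                     # (4, 8): one MSA-contextualized vector per TD token
```

In the context-aware variant described in the abstract, one would presumably learn a separate mask (or mask-generating component) per task, so that, for example, sarcasm detection attends to different dialect-MSA alignments than topic classification.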