Efficient Adaptation of English Language Models for Morphologically Rich and Underrepresented Languages: The Case of Arabic
Proceedings of the Fifteenth Language Resources and Evaluation Conference (LREC 2026)
Abstract
Transformer-based language models have revolutionized NLP, yet their adaptation to morphologically rich and dialectally diverse languages such as Arabic remains non-trivial. We introduce ModernAraBERT, a resource-efficient adaptation of the English-pretrained ModernBERT for Arabic, employing continued pretraining on large Arabic corpora followed by lightweight head-only fine-tuning with a frozen encoder. This strategy retains cross-lingual knowledge while capturing Arabic morphology and orthographic variation, offering a scalable alternative to training monolingual models from scratch. We evaluate ModernAraBERT against strong Arabic-specific and multilingual baselines (AraBERTv1, AraBERTv2, MARBERT, mBERT) on three representative Arabic NLP tasks: sentiment analysis, named entity recognition, and extractive question answering. Across all tasks, ModernAraBERT achieves consistent and often substantial improvements, particularly for sentence- and token-level understanding, demonstrating that modern English encoder architectures can be transferred efficiently to Arabic through language-adaptive pretraining. Beyond Arabic, our findings highlight a generalizable paradigm for extending state-of-the-art models to morphologically complex and underrepresented languages with reduced computational overhead.
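To make the fine-tuning stage concrete, the sketch below shows one common way to implement head-only training with a frozen encoder using PyTorch and Hugging Face Transformers. It is not the authors' released code: the starting checkpoint, the three-way label space, the mean-pooling scheme, and the learning rate are illustrative assumptions, and only the general recipe (freeze all encoder parameters, optimize only a lightweight task head) comes from the abstract.

```python
import torch
from torch import nn
from transformers import AutoModel, AutoTokenizer

# Assumed starting checkpoint (the public English ModernBERT base model);
# loading it requires a recent transformers release.
model_name = "answerdotai/ModernBERT-base"

tokenizer = AutoTokenizer.from_pretrained(model_name)
encoder = AutoModel.from_pretrained(model_name)

# Freeze every encoder parameter so only the task head receives updates.
for param in encoder.parameters():
    param.requires_grad = False
encoder.eval()

# Lightweight task head; 3 classes is an illustrative choice
# (e.g., negative / neutral / positive sentiment).
head = nn.Linear(encoder.config.hidden_size, 3)

optimizer = torch.optim.AdamW(head.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def train_step(texts, labels):
    """One head-only update: encode without gradients, train the head."""
    batch = tokenizer(texts, padding=True, truncation=True,
                      return_tensors="pt")
    with torch.no_grad():  # the frozen encoder needs no gradient tracking
        hidden = encoder(**batch).last_hidden_state
    # Mean-pool token vectors into one sentence vector, ignoring padding.
    mask = batch["attention_mask"].unsqueeze(-1)
    pooled = (hidden * mask).sum(dim=1) / mask.sum(dim=1)
    loss = loss_fn(head(pooled), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Hypothetical usage with toy data:
# loss = train_step(["example text", "another text"], torch.tensor([0, 2]))
```

Because gradients flow only through the small head, each step avoids backpropagation through the encoder entirely, which is the source of the reduced computational overhead the abstract emphasizes.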