Jargon: A Suite of Language Models and Evaluation Tasks for French Specialized Domains
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
Abstract
Pretrained Language Models (PLMs) are the de facto backbone of most state-of-the-art NLP systems. In this paper, we introduce a family of domain-specific PLMs for French, focusing on three important domains: transcribed speech, medicine, and law. Because these domains often involve processing long documents, we use a transformer architecture based on an efficient attention method (LinFormer) to maximise the models' utility. We evaluate and compare our models to state-of-the-art models on a diverse set of tasks and datasets, some of which are introduced in this paper, and we gather these datasets into a new French-language evaluation benchmark for the three domains. We also compare various training configurations: continued pretraining, pretraining from scratch, and single- versus multi-domain pretraining. Extensive domain-specific experiments show that competitive downstream performance can be attained even when pretraining with the approximate LinFormer attention mechanism. For full reproducibility, we release the models and pretraining data, as well as the contributed datasets.
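For illustration, the sketch below shows the core idea behind LinFormer-style attention referenced in the abstract: keys and values are compressed along the sequence dimension with learned linear projections, so attention cost grows linearly rather than quadratically with document length. This is a minimal single-head sketch in PyTorch under assumed dimensions; the class name and hyperparameters are illustrative and do not reflect the Jargon models' actual configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class LinformerSelfAttention(nn.Module):
    """Single-head self-attention with LinFormer-style low-rank projection
    of keys and values along the sequence dimension.
    All sizes here (seq_len, proj_dim, embed_dim) are illustrative."""

    def __init__(self, embed_dim: int, seq_len: int, proj_dim: int):
        super().__init__()
        self.q = nn.Linear(embed_dim, embed_dim)
        self.k = nn.Linear(embed_dim, embed_dim)
        self.v = nn.Linear(embed_dim, embed_dim)
        # Learned projections that compress the sequence length n -> k,
        # reducing attention cost from O(n^2) to O(n * k).
        self.e = nn.Linear(seq_len, proj_dim, bias=False)  # projects keys
        self.f = nn.Linear(seq_len, proj_dim, bias=False)  # projects values
        self.scale = embed_dim ** -0.5

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, embed_dim)
        q, k, v = self.q(x), self.k(x), self.v(x)
        # Compress keys/values along the sequence axis -> (batch, proj_dim, embed_dim)
        k = self.e(k.transpose(1, 2)).transpose(1, 2)
        v = self.f(v.transpose(1, 2)).transpose(1, 2)
        # Attention matrix is (batch, seq_len, proj_dim) instead of (batch, seq_len, seq_len)
        attn = F.softmax(q @ k.transpose(1, 2) * self.scale, dim=-1)
        return attn @ v  # (batch, seq_len, embed_dim)


# Usage: a toy batch of 2 documents of 512 tokens, compressed to 64 attention positions.
layer = LinformerSelfAttention(embed_dim=768, seq_len=512, proj_dim=64)
out = layer(torch.randn(2, 512, 768))
print(out.shape)  # torch.Size([2, 512, 768])
```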
Details
Authors
- Vincent Segonne
- Aidan Mannion
- Laura Cristina Alonzo Canul
- Alexandre Daniel Audibert
- Xingyu Liu
- Cécile Macaire
- Adrien Pupier
- Yongxin Zhou
- Mathilde Aguiar
- Felix E. Herron
- Magali Norré
- Massih R Amini
- Pierrette Bouillon
- Iris Eshkol-Taravella
- Emmanuelle Esperança-Rodier
- Thomas François
- Lorraine Goeuriot
- Jérôme Goulian
- Mathieu Lafourcade
- Benjamin Lecouteux
- François Portet
- Fabien Ringeval
- Vincent Vandeghinste
- Maximin Coavoux
- Marco Dinarelli
- Didier Schwab