
BERTrade: Using Contextual Embeddings to Parse Old French

Proceedings of the Thirteenth International Conference on Language Resources and Evaluation (LREC 2022)

DOI: 10.63317/3kwhqmw7oh9s

Abstract

The successes of contextual word embeddings learned by training large-scale language models, while remarkable, have mostly occurred for languages where significant amounts of raw text are available and where the annotated data for downstream tasks have relatively regular spelling. Conversely, it is not yet completely clear if these models are also well suited for lesser-resourced and more irregular languages. We study the case of Old French, which is in the interesting position of having a relatively limited amount of available raw text, but enough annotated resources to assess the relevance of contextual word embedding models for downstream NLP tasks. In particular, we use POS tagging and dependency parsing to evaluate the quality of such models in a large array of configurations, including models trained from scratch on small amounts of raw text and models pre-trained on other languages but fine-tuned on Medieval French data.
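The last configuration mentioned above, adapting a model pre-trained on other languages to Medieval French, amounts to continued masked-language-model training on raw Old French text before using the resulting embeddings for tagging and parsing. The sketch below illustrates that step with Hugging Face Transformers; it is not the authors' actual pipeline, and the source model, corpus file name, and hyperparameters are placeholder assumptions.

```python
# Illustrative sketch only: continue masked-language-model pre-training of a
# multilingual model on raw Old French text. Model name, corpus path, and
# hyperparameters are assumptions, not the paper's reported setup.
from datasets import load_dataset
from transformers import (
    AutoModelForMaskedLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_name = "bert-base-multilingual-cased"  # assumed pre-trained source model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)

# Hypothetical plain-text corpus of Old French, one passage per line.
raw = load_dataset("text", data_files={"train": "old_french_corpus.txt"})
tokenized = raw.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True,
    remove_columns=["text"],
)

# Standard MLM objective: randomly mask 15% of tokens and predict them.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)
args = TrainingArguments(
    output_dir="oldfrench-mlm",
    per_device_train_batch_size=16,
    num_train_epochs=3,
)
Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    data_collator=collator,
).train()
```

The adapted encoder can then be plugged into a POS tagger or dependency parser as a contextual embedding layer, which is the kind of downstream evaluation the paper performs.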

Details

Paper ID
lrec2022-main-119
Pages
pp. 1104-1113
BibKey
grobol-etal-2022-bertrade
Editor
N/A
Publisher
European Language Resources Association (ELRA)
ISSN
2522-2686
ISBN
979-10-95546-38-2
Conference
Thirteenth Language Resources and Evaluation Conference
Location
Marseille, France
Date
20–25 June 2022

Authors

  • Loïc Grobol
  • Mathilde Regnault
  • Pedro Ortiz Suarez
  • Benoît Sagot
  • Laurent Romary
  • Benoit Crabbé

Links