
Re-train or Train from Scratch? Comparing Pre-training Strategies of BERT in the Medical Domain

Proceedings of the Thirteenth International Conference on Language Resources and Evaluation (LREC 2022)

DOI:10.63317/237kcd4okhww

Abstract

BERT models used in specialized domains all seem to be the result of a simple strategy: initializing with the original BERT and then resuming pre-training on a specialized corpus. This method yields rather good performance (e.g. BioBERT (Lee et al., 2020), SciBERT (Beltagy et al., 2019), BlueBERT (Peng et al., 2019)). However, it seems reasonable to think that training directly on a specialized corpus, using a specialized vocabulary, could result in more tailored embeddings and thus help performance. To test this hypothesis, we train BERT models from scratch using many configurations involving general and medical corpora. Based on evaluations using four different tasks, we find that the initial corpus only has a weak influence on the performance of BERT models when these are further pre-trained on a medical corpus.
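To make the two strategies concrete, the sketch below (not the authors' released code) contrasts them using the Hugging Face transformers, tokenizers, and datasets libraries: "re-train" continues masked-language-model pre-training from the original BERT checkpoint and its general-domain vocabulary, while "from scratch" builds a specialized WordPiece vocabulary on the medical corpus and starts from a randomly initialized model. The corpus path, vocabulary size, and training hyperparameters are illustrative placeholders.

# Minimal sketch of the two pre-training strategies, assuming a plain-text
# medical corpus; paths and hyperparameters are placeholders.
from datasets import load_dataset
from transformers import (
    BertConfig,
    BertForMaskedLM,
    BertTokenizerFast,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

medical_corpus = "medical_corpus.txt"  # placeholder path to the specialized corpus

# Strategy 1: "re-train" -- start from the original BERT and its general-domain
# vocabulary, then resume MLM pre-training on the medical corpus
# (the BioBERT / BlueBERT recipe).
tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")

# Strategy 2: "from scratch" -- learn a specialized WordPiece vocabulary on the
# medical corpus and randomly initialize a BERT of the same size:
#   from tokenizers import BertWordPieceTokenizer
#   wp = BertWordPieceTokenizer(lowercase=True)
#   wp.train(files=[medical_corpus], vocab_size=30522)
#   wp.save_model("medical-vocab")
#   tokenizer = BertTokenizerFast.from_pretrained("medical-vocab")
#   model = BertForMaskedLM(BertConfig(vocab_size=tokenizer.vocab_size))

# Shared masked-language-model pre-training loop for either strategy.
dataset = load_dataset("text", data_files={"train": medical_corpus})["train"]
dataset = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
    batched=True,
    remove_columns=["text"],
)
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="bert-medical", per_device_train_batch_size=32),
    train_dataset=dataset,
    data_collator=collator,
)
trainer.train()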

Details

Paper ID
lrec2022-main-281
Pages
pp. 2626-2633
BibKey
el-boukkouri-etal-2022-train
Editor
N/A
Publisher
European Language Resources Association (ELRA)
ISSN
2522-2686
ISBN
979-10-95546-38-2
Conference
Thirteenth Language Resources and Evaluation Conference
Location
Marseille, France
Date
20–25 June 2022

Authors

  • Hicham El Boukkouri

  • Olivier Ferret

  • Thomas Lavergne

  • Pierre Zweigenbaum
