LREC-COLING 2024 (Main Conference)

Multi-Stage Multi-Modal Pre-Training for Automatic Speech Recognition

Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)

DOI:10.63317/3j8jpeibcdrt

Abstract

Recent advances in machine learning have demonstrated that multi-modal pre-training can improve automatic speech recognition (ASR) performance compared to randomly initialized models, even when models are fine-tuned on uni-modal tasks. Existing multi-modal pre-training methods for the ASR task have primarily focused on single-stage pre-training, where a single unsupervised task is used for pre-training followed by fine-tuning on the downstream task. In this work, we introduce a novel method combining multi-modal and multi-task unsupervised pre-training with a translation-based supervised mid-training approach. We empirically demonstrate that such a multi-stage approach leads to relative word error rate (WER) improvements of up to 38.45% over baselines on both LibriSpeech and SUPERB. Additionally, we share several important findings for choosing pre-training methods and datasets.

Details

Paper ID
lrec2024-main-1045
Pages
pp. 11969-11980
BibKey
jain-etal-2024-multi
Editor
N/A
Publisher
European Language Resources Association (ELRA) and ICCL
ISSN
2522-2686
ISBN
979-10-95546-34-4
Conference
Joint International Conference on Computational Linguistics, Language Resources and Evaluation
Location
Turin, Italy
Date
20–25 May 2024

Authors

  • Yash Jain
  • David M. Chan
  • Pranav Dheram
  • Aparna Khare
  • Olabanji Shonibare
  • Venkatesh Ravichandran
  • Shalini Ghosh

Links