LREC 2020 Workshop

Semi-supervised acoustic and language model training for English-isiZulu code-switched speech recognition

Proceedings of the 4th Workshop on Computational Approaches to Code Switching

DOI:10.63317/3kppwbvm8xkd

Abstract

We present an analysis of semi-supervised acoustic and language model training for English-isiZulu code-switched (CS) automatic speech recognition (ASR) using soap opera speech. Approximately 11 hours of untranscribed multilingual speech were transcribed automatically using four bilingual CS transcription systems operating in English-isiZulu, English-isiXhosa, English-Setswana and English-Sesotho. These transcriptions were incorporated into the acoustic and language model training sets. Results showed that the TDNN-F acoustic models benefit from the additional semi-supervised data, and that even better performance could be achieved by including additional CNN layers. Using these CNN-TDNN-F acoustic models, a first iteration of semi-supervised training achieved an absolute mixed-language word error rate (WER) reduction of 3.44%, and a second iteration a further 2.18%. Although the languages in the untranscribed data were unknown, the best results were obtained when all automatically transcribed data was used for training, and not just the utterances classified as English-isiZulu. Despite perplexity improvements, the semi-supervised language model was not able to improve ASR performance.
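The iterative procedure the abstract describes (automatically transcribe an untranscribed pool, merge those transcriptions into the training set, retrain, repeat) can be sketched as a generic self-training loop. This is an illustrative sketch only, not the authors' code: the function names and the toy data structures are hypothetical placeholders, and the real systems operate on acoustic features with Kaldi-style recipes rather than Python lists.

```python
# Hypothetical sketch of the semi-supervised self-training loop
# described in the abstract. All names here are placeholders.

def semi_supervised_iterations(labelled, untranscribed,
                               transcribe, train, n_iters=2):
    """Run n_iters rounds of self-training.

    labelled:      list of (utterance, transcription) pairs
    untranscribed: list of utterances without transcriptions
    transcribe:    callable (model, utterance) -> hypothesised transcription
    train:         callable (list of pairs) -> model
    """
    # Seed system trained on manually transcribed data only.
    model = train(labelled)
    for _ in range(n_iters):
        # Automatically transcribe the untranscribed pool with the
        # current system (the paper uses four bilingual CS systems).
        pseudo = [(u, transcribe(model, u)) for u in untranscribed]
        # Pool manual and automatic transcriptions and retrain.
        # The paper found it best to keep ALL automatically transcribed
        # data, not only utterances classified as English-isiZulu.
        model = train(labelled + pseudo)
    return model
```

The paper's finding that a second iteration yields a further WER reduction corresponds to `n_iters=2` in this sketch, with the pool re-transcribed by the improved model each round.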

Details

Paper ID
lrec2020-ws-calcs-7
Pages
pp. 52-56
BibKey
biswas-etal-2020-semi
Editor
N/A
Publisher
European Language Resources Association (ELRA)
ISSN
N/A
ISBN
N/A
Workshop
Proceedings of the 4th Workshop on Computational Approaches to Code Switching
Location
N/A
Date
11–16 May 2020

Authors

  • Astik Biswas

  • Febe de Wet

  • Ewald van der Westhuizen

  • Thomas Niesler

Links