Self-supervised learning of speech representations with Dutch archival data
By: Nik Vaessen, Roeland Ordelman, David A. van Leeuwen
Potential Business Impact:
Teaches computers to understand Dutch speech better.
This paper explores the use of Dutch archival television broadcast data for self-supervised learning (SSL) of speech foundation models, specifically wav2vec 2.0. First, we study the data quality assumptions of pre-training and show how music, noise, and speaker overlap affect SSL convergence and downstream fine-tuning performance. Second, we explore effective pre-processing strategies, using Whisper and WhisperX, to convert the noisy broadcast dataset into a high-quality dataset for pre-training. Third, we compare monolingual and multilingual pre-training with equivalent amounts of data, and show that monolingual pre-training is more robust to out-of-domain data. Lastly, we achieve a state-of-the-art LARGE wav2vec 2.0 model for the Dutch language by continuing pre-training from a wav2vec 2.0 XLS-R checkpoint on our 55k-hour archival dataset.
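For readers who want to see what the final step looks like in practice, the sketch below shows how continued pre-training from a public XLS-R checkpoint can be set up with the Hugging Face transformers library. This is an illustration under assumptions, not the authors' training code: the checkpoint name (facebook/wav2vec2-xls-r-300m), the masking settings, and the random stand-in audio are placeholders, and the paper's 55k-hour archival data and exact hyperparameters are not reproduced.

# Minimal sketch (not the authors' code) of continuing wav2vec 2.0
# pre-training from a public XLS-R checkpoint with Hugging Face transformers.
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2ForPreTraining
from transformers.models.wav2vec2.modeling_wav2vec2 import (
    _compute_mask_indices,
    _sample_negative_indices,
)

checkpoint = "facebook/wav2vec2-xls-r-300m"  # assumed public XLS-R checkpoint
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained(checkpoint)
model = Wav2Vec2ForPreTraining.from_pretrained(checkpoint)
model.train()

# Two 5-second 16 kHz waveforms; random noise stands in for archival audio.
waveforms = [torch.randn(16000 * 5).numpy() for _ in range(2)]
inputs = feature_extractor(waveforms, sampling_rate=16000, return_tensors="pt")

# Sample masked time steps and negatives for the contrastive objective
# (mask_prob and mask_length are the common wav2vec 2.0 defaults, assumed here).
batch_size, raw_length = inputs.input_values.shape
seq_length = model._get_feat_extract_output_lengths(raw_length).item()
mask_time_indices = _compute_mask_indices(
    shape=(batch_size, seq_length), mask_prob=0.65, mask_length=10
)
sampled_negative_indices = _sample_negative_indices(
    features_shape=(batch_size, seq_length),
    num_negatives=model.config.num_negatives,
    mask_time_indices=mask_time_indices,
)

loss = model(
    inputs.input_values,
    mask_time_indices=torch.tensor(mask_time_indices, dtype=torch.bool),
    sampled_negative_indices=torch.tensor(sampled_negative_indices, dtype=torch.long),
).loss
loss.backward()  # one contrastive pre-training step; wrap in an optimizer loop

In a real run, the random waveforms would be replaced by speech-only segments cut from the archival broadcasts after the Whisper/WhisperX pre-processing step described above.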
Similar Papers
What do self-supervised speech models know about Dutch? Analyzing advantages of language-specific pre-training
Computation and Language
Teaches computers to understand Dutch speech better.
Leveraging Audio-Visual Data to Reduce the Multilingual Gap in Self-Supervised Speech Models
Computation and Language
Helps computers understand many languages better.
Analyzing the relationships between pretraining language, phonetic, tonal, and speaker information in self-supervised speech models
Computation and Language
Computer models learn languages better, even new ones.