Score: 1

Self-supervised learning of speech representations with Dutch archival data

Published: July 6, 2025 | arXiv ID: 2507.04554v2

By: Nik Vaessen, Roeland Ordelman, David A. van Leeuwen

Potential Business Impact:

Improves automatic recognition of spoken Dutch, enabling better transcription and search of Dutch-language audio.

Business Areas:
Speech Recognition, Data and Analytics, Software

This paper explores the use of Dutch archival television broadcast data for self-supervised learning (SSL) of speech foundation models, specifically wav2vec 2.0. We first study data quality assumptions for pre-training, and show how music, noise and speaker overlap affect SSL convergence and downstream fine-tuning performance. Secondly, we explore effective pre-processing strategies to convert the noisy broadcast dataset into a high-quality dataset for pre-training, using Whisper and WhisperX. Thirdly, we compare mono-lingual and multi-lingual pre-training with equivalent amounts of data, and show that mono-lingual pre-training is more robust to out-of-domain data. Lastly, we achieve a state-of-the-art LARGE wav2vec 2.0 model for the Dutch language by continuing pre-training from a wav2vec 2.0 XLS-R model checkpoint with our 55k-hour archival dataset.
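The abstract does not spell out the Whisper/WhisperX pre-processing step; the following is a minimal sketch, assuming the public WhisperX Python API, of how transcription plus forced alignment could be used to carve clean speech segments out of noisy broadcast audio. The file name, model size, language code, and segment-extraction logic are illustrative stand-ins, not the authors' actual pipeline.

```python
import whisperx

# Assumptions: whisperx is installed, a CUDA GPU is available, and
# "broadcast_episode.wav" is a stand-in path for one archival recording.
device = "cuda"
model = whisperx.load_model("large-v2", device, compute_type="float16")

# whisperx.load_audio resamples to 16 kHz mono, matching wav2vec 2.0 input.
audio = whisperx.load_audio("broadcast_episode.wav")
result = model.transcribe(audio, batch_size=16, language="nl")

# Forced alignment tightens segment boundaries, so the stretches between
# segments (music, jingles, silence, overlapping speakers) can be discarded.
align_model, metadata = whisperx.load_align_model(language_code="nl", device=device)
aligned = whisperx.align(result["segments"], align_model, metadata, audio, device)

SAMPLE_RATE = 16000
speech_chunks = [
    audio[int(seg["start"] * SAMPLE_RATE) : int(seg["end"] * SAMPLE_RATE)]
    for seg in aligned["segments"]
]
```

In the same spirit, here is a minimal sketch of continuing wav2vec 2.0 pre-training from a multilingual XLS-R checkpoint using Hugging Face Transformers. The 300M checkpoint name, masking parameters, learning rate, and random batch are assumptions for illustration; the paper's actual setup (a LARGE model trained on 55k hours of archival audio) is not reproduced here.

```python
import torch
from transformers import Wav2Vec2ForPreTraining
from transformers.models.wav2vec2.modeling_wav2vec2 import (
    _compute_mask_indices,
    _sample_negative_indices,
)

# Assumption: the 300M-parameter XLS-R checkpoint stands in for the
# checkpoint used in the paper; the batch below is random noise.
model = Wav2Vec2ForPreTraining.from_pretrained("facebook/wav2vec2-xls-r-300m")
model.train()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

batch_size, raw_len = 2, 16000 * 5          # two 5-second 16 kHz waveforms
input_values = torch.randn(batch_size, raw_len)
seq_len = model._get_feat_extract_output_lengths(raw_len).item()

# Sample time spans to mask and, for each masked step, negative
# distractors for the contrastive objective.
mask = _compute_mask_indices((batch_size, seq_len), mask_prob=0.65, mask_length=10)
negatives = _sample_negative_indices(
    (batch_size, seq_len), model.config.num_negatives, mask_time_indices=mask
)
mask = torch.tensor(mask, dtype=torch.bool)
negatives = torch.tensor(negatives, dtype=torch.long)

# One contrastive + diversity loss step, as in wav2vec 2.0 pre-training.
loss = model(
    input_values, mask_time_indices=mask, sampled_negative_indices=negatives
).loss
loss.backward()
optimizer.step()
optimizer.zero_grad()
```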
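Continued pre-training, as opposed to training from scratch, keeps the multilingual representations of XLS-R while adapting them to Dutch archival speech; in practice the data loader would feed the pre-processed speech chunks from the pipeline above instead of random tensors.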

Country of Origin
🇳🇱 Netherlands

Page Count
5 pages

Category
Computer Science: Sound