What do self-supervised speech models know about Dutch? Analyzing advantages of language-specific pre-training
By: Marianne de Heer Kloots, Hosein Mohebbi, Charlotte Pouw, and more
Potential Business Impact:
Teaches computers to understand Dutch speech better.
How language-specific are speech representations learned by self-supervised models? Existing work has shown that a range of linguistic features can be successfully decoded from end-to-end models trained only on speech recordings. However, it is less clear to what extent pre-training on a specific language improves the encoding of language-specific linguistic information. Here we test the encoding of Dutch phonetic and lexical information in the internal representations of self-supervised Wav2Vec2 models. Pre-training exclusively on Dutch improves the representation of Dutch linguistic features compared to pre-training on similar amounts of English or larger amounts of multilingual data. This language-specific advantage is well-detected by trained clustering or classification probes, and is partially observable using zero-shot metrics. Furthermore, the language-specific benefit in linguistic feature encoding aligns with downstream performance on Automatic Speech Recognition for Dutch.
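To make the probing idea concrete, the sketch below shows one common way such analyses are set up: extract frame-level hidden states from a Wav2Vec2 model and train a linear classification probe to predict phone identity from them. This is a minimal illustration under stated assumptions, not the paper's exact pipeline: the checkpoint name, layer index, dummy waveform, and random "phone labels" are placeholders; in practice the Dutch-pretrained checkpoints and time-aligned Dutch phone annotations from the study would be used.

```python
# Minimal sketch of frame-level linear probing of Wav2Vec2 hidden states,
# assuming the HuggingFace Transformers implementation. Checkpoint name and
# label data are placeholders, not the paper's actual models or annotations.
import torch
import numpy as np
from sklearn.linear_model import LogisticRegression
from transformers import Wav2Vec2Model

model = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-base")  # placeholder checkpoint
model.eval()

# Dummy 16 kHz waveform standing in for a Dutch utterance (1 second).
waveform = torch.randn(1, 16000)

with torch.no_grad():
    outputs = model(waveform, output_hidden_states=True)

# hidden_states: tuple of (1 + num_layers) tensors, each [batch, frames, dim]
layer = 6  # an intermediate Transformer layer chosen for illustration
frames = outputs.hidden_states[layer].squeeze(0).numpy()  # [frames, dim]

# Placeholder frame-level phone labels; in a real probe these come from
# forced alignments of Dutch speech (roughly one phone ID per 20 ms frame).
labels = np.random.randint(0, 40, size=frames.shape[0])

# Linear classification probe: how well can phone identity be read out
# linearly from this layer's representations?
probe = LogisticRegression(max_iter=1000).fit(frames, labels)
print(f"Layer {layer} probe accuracy (dummy data): {probe.score(frames, labels):.3f}")
```

The abstract also mentions trained clustering probes and zero-shot metrics; those evaluations are not reproduced here, and this sketch only illustrates the classification-probe component.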
Similar Papers
Self-supervised learning of speech representations with Dutch archival data
Sound
Teaches computers to understand Dutch speech better.
Analyzing the relationships between pretraining language, phonetic, tonal, and speaker information in self-supervised speech models
Computation and Language
Computer models learn languages better, even new ones.
On the Cross-lingual Transferability of Pre-trained wav2vec2-based Models
Computation and Language
Makes computers understand many languages better.