Analyzing the relationships between pretraining language, phonetic, tonal, and speaker information in self-supervised speech models
By: Michele Gubian, Ioana Krehan, Oli Liu, and more
Potential Business Impact:
Speech models trained on one language can be reused for new languages.
Analyses of self-supervised speech models have begun to reveal where and how they represent different types of information. However, almost all analyses have focused on English. Here, we examine how wav2vec2 models trained on four different languages encode both language-matched and non-matched speech. We use probing classifiers and geometric analyses to examine how phones, lexical tones, and speaker information are represented. We show that for all pretraining and test languages, the subspaces encoding phones, tones, and speakers are largely orthogonal, and that layerwise patterns of probing accuracy are similar, with a relatively small advantage for matched-language phone and tone (but not speaker) probes in the later layers. Our findings suggest that the structure of representations learned by wav2vec2 is largely independent of the speech material used during pretraining.
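To make the two analyses concrete, below is a minimal sketch (not the authors' code) of layerwise linear probing and a principal-angle check of subspace orthogonality on wav2vec2 hidden states, using Hugging Face transformers, scikit-learn, and SciPy. The checkpoint name, mean-pooling to utterance level, training-set probe accuracy, and the PCA-on-class-means construction of the subspaces are illustrative assumptions; the paper probes frame-level representations with its own experimental setup.

```python
# Illustrative sketch: layerwise probing of wav2vec2 hidden states and a
# geometric check of how orthogonal two label-specific subspaces are.
# Assumes `waveforms` is a list of 16 kHz mono numpy arrays, with one
# phone label and one speaker label per utterance.

import numpy as np
import torch
from transformers import Wav2Vec2Model, Wav2Vec2FeatureExtractor
from sklearn.linear_model import LogisticRegression
from sklearn.decomposition import PCA
from scipy.linalg import subspace_angles

MODEL_NAME = "facebook/wav2vec2-base"  # swap in a non-English checkpoint to test cross-lingual transfer
extractor = Wav2Vec2FeatureExtractor.from_pretrained(MODEL_NAME)
model = Wav2Vec2Model.from_pretrained(MODEL_NAME).eval()

def layerwise_features(waveforms, sr=16000):
    """Return one (n_utterances, D) matrix per layer of mean-pooled hidden states."""
    per_layer = None
    with torch.no_grad():
        for wav in waveforms:
            inputs = extractor(wav, sampling_rate=sr, return_tensors="pt")
            out = model(inputs.input_values, output_hidden_states=True)
            # out.hidden_states: tuple of (1, T, D) tensors, one per layer (plus the CNN output)
            pooled = [h.mean(dim=1).squeeze(0).numpy() for h in out.hidden_states]
            if per_layer is None:
                per_layer = [[] for _ in pooled]
            for layer, vec in zip(per_layer, pooled):
                layer.append(vec)
    return [np.stack(layer) for layer in per_layer]

def probe_accuracy(X, y):
    """Linear probe: logistic regression accuracy (use a held-out split in practice)."""
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    return clf.score(X, y)

def label_subspace(X, y, n_components=10):
    """Orthonormal basis for the top PCA directions of the class-mean vectors."""
    means = np.stack([X[y == c].mean(axis=0) for c in np.unique(y)])
    pca = PCA(n_components=min(n_components, len(means) - 1)).fit(means)
    return pca.components_.T  # shape (D, k)

# Example usage: principal angles near 90 degrees indicate near-orthogonal subspaces.
# feats = layerwise_features(waveforms)
# for i, X in enumerate(feats):
#     acc_phone = probe_accuracy(X, phone_labels)
#     acc_spkr = probe_accuracy(X, speaker_labels)
#     angles = np.degrees(subspace_angles(label_subspace(X, phone_labels),
#                                         label_subspace(X, speaker_labels)))
#     print(f"layer {i}: phone {acc_phone:.2f}, speaker {acc_spkr:.2f}, "
#           f"min principal angle {angles.min():.1f} deg")
```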
Similar Papers
What do self-supervised speech models know about Dutch? Analyzing advantages of language-specific pre-training
Computation and Language
Teaches computers to understand Dutch speech better.
On the Cross-lingual Transferability of Pre-trained wav2vec2-based Models
Computation and Language
Makes computers understand many languages better.
Self-supervised learning of speech representations with Dutch archival data
Sound
Teaches computers to understand Dutch speech better.