MauBERT: Universal Phonetic Inductive Biases for Few-Shot Acoustic Units Discovery
By: Angelo Ortiz Tandazo, Manel Khentout, Youssef Benchekroun, and more
Potential Business Impact:
Helps computers understand many languages' sounds.
This paper introduces MauBERT, a multilingual extension of HuBERT that leverages articulatory features for robust cross-lingual phonetic representation learning. We continue HuBERT pre-training with supervision based on a phonetic-to-articulatory feature mapping across 55 languages. Our models learn from multilingual data to predict articulatory features or phones, yielding language-independent representations that capture multilingual phonetic properties. Through comprehensive ABX discriminability testing, we show that MauBERT models produce more context-invariant representations than state-of-the-art multilingual self-supervised learning models. The models also adapt effectively to unseen languages and to casual speech with minimal self-supervised fine-tuning (10 hours of speech). This establishes an effective approach for instilling linguistic inductive biases in self-supervised speech models.
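The supervision the abstract describes rests on mapping each phone label to a vector of articulatory features, giving one shared target space across all 55 languages. Below is a minimal illustrative sketch of such a mapping; the feature inventory, the mini phone table, and the phones_to_targets helper are assumptions made for illustration, not the authors' actual mapping.

```python
# Illustrative sketch: mapping phone labels to binary articulatory feature
# vectors, the kind of supervision target described in the abstract.
# The feature set and values below are hand-picked examples (assumptions),
# not the paper's mapping.
import numpy as np

FEATURES = ["voiced", "nasal", "labial", "coronal", "dorsal", "continuant"]

# Hypothetical mini feature table (IPA phone -> feature values).
PHONE_TABLE = {
    "p": [0, 0, 1, 0, 0, 0],
    "b": [1, 0, 1, 0, 0, 0],
    "m": [1, 1, 1, 0, 0, 0],
    "t": [0, 0, 0, 1, 0, 0],
    "s": [0, 0, 0, 1, 0, 1],
    "k": [0, 0, 0, 0, 1, 0],
}

def phones_to_targets(phones):
    """Convert a phone sequence into a (T, n_features) array of
    articulatory targets for frame-level prediction."""
    return np.array([PHONE_TABLE[p] for p in phones], dtype=np.float32)

targets = phones_to_targets(["b", "m", "s"])
print(targets.shape)  # (3, 6)
```

Because the features, not the phones themselves, are the prediction targets, phones from different languages that share articulation land on the same supervision signal, which is the cross-lingual benefit the abstract points to.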
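The evaluation relies on ABX discriminability: given items A and X drawn from one phone category and B from another, the test measures how often X is closer to A than to B. Here is a minimal sketch over fixed-size embeddings; the cosine distance and the use of pooled vectors are simplifying assumptions (ABX pipelines typically compare frame sequences with DTW).

```python
# Illustrative sketch of an ABX discriminability score (assumptions:
# cosine distance over mean-pooled embeddings; real ABX setups usually
# align frame sequences with DTW).
import numpy as np

def abx_score(cat_a, cat_b):
    """Fraction of (A, B, X) triples, with A and X from cat_a and B from
    cat_b, where X is closer to A than to B (higher is better).
    cat_a, cat_b: lists of 1-D embedding vectors."""
    def dist(u, v):  # cosine distance
        return 1.0 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    correct, total = 0, 0
    for i, a in enumerate(cat_a):
        for j, x in enumerate(cat_a):
            if i == j:
                continue  # A and X must be distinct tokens of the category
            for b in cat_b:
                correct += dist(a, x) < dist(b, x)
                total += 1
    return correct / total

# Toy check: two well-separated categories should score near 1.0.
rng = np.random.default_rng(0)
cat_a = [rng.normal(0.0, 0.1, 16) + 1.0 for _ in range(5)]
cat_b = [rng.normal(0.0, 0.1, 16) - 1.0 for _ in range(5)]
print(abx_score(cat_a, cat_b))
```

A representation that is context-invariant, as the abstract claims for MauBERT, keeps tokens of the same phone close together regardless of surrounding phones, which directly raises this score.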
Similar Papers
BabyHuBERT: Multilingual Self-Supervised Learning for Segmenting Speakers in Child-Centered Long-Form Recordings
Audio and Speech Processing
Helps computers understand babies talking better.
Scaling HuBERT for African Languages: From Base to Large and XL
Computation and Language
Makes computers understand many African languages better.
MT-HuBERT: Self-Supervised Mix-Training for Few-Shot Keyword Spotting in Mixed Speech
Sound
Helps voice assistants hear many words at once.