MauBERT: Universal Phonetic Inductive Biases for Few-Shot Acoustic Units Discovery

Published: December 22, 2025 | arXiv ID: 2512.19612v1

By: Angelo Ortiz Tandazo, Manel Khentout, Youssef Benchekroun, and more

Potential Business Impact:

Enables speech systems to recognize the sounds of many languages, including unseen ones, after only a few hours of adaptation data.

Business Areas:
Speech Recognition, Data and Analytics, Software

This paper introduces MauBERT, a multilingual extension of HuBERT that leverages articulatory features for robust cross-lingual phonetic representation learning. We continue HuBERT pre-training with supervision based on a phonetic-to-articulatory feature mapping in 55 languages. Our models learn from multilingual data to predict articulatory features or phones, resulting in language-independent representations that capture multilingual phonetic properties. Through comprehensive ABX discriminability testing, we show MauBERT models produce more context-invariant representations than state-of-the-art multilingual self-supervised learning models. Additionally, the models effectively adapt to unseen languages and casual speech with minimal self-supervised fine-tuning (10 hours of speech). This establishes an effective approach for instilling linguistic inductive biases in self-supervised speech models.
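The ABX discriminability test mentioned above measures whether a representation keeps two tokens of the same phone closer together than tokens of different phones. A minimal sketch of a single ABX trial is below; it assumes mean-pooled embeddings and cosine distance for simplicity, whereas the full ABX evaluation typically aligns frame sequences (e.g. with dynamic time warping) before comparing them. The function name and toy vectors are illustrative, not from the paper.

```python
import numpy as np

def abx_error(a, b, x):
    """One ABX trial: a and x share a phone category, b differs.

    Returns 1.0 on error (x judged closer to b), 0.5 on a tie,
    0.0 on success. Inputs are fixed-size embedding vectors here;
    the real test compares aligned frame sequences.
    """
    def cos_dist(u, v):
        return 1.0 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

    d_ax, d_bx = cos_dist(a, x), cos_dist(b, x)
    if d_ax < d_bx:
        return 0.0
    if d_ax > d_bx:
        return 1.0
    return 0.5

# Toy vectors: x is closer to a (its same-category token), so the
# trial succeeds and the error contribution is 0.0.
a = np.array([1.0, 0.0, 0.2])
x = np.array([0.9, 0.1, 0.2])
b = np.array([0.0, 1.0, 0.5])
print(abx_error(a, b, x))  # → 0.0
```

Averaging this error over many (a, b, x) triplets, within or across speakers and contexts, yields the ABX error rate the paper uses to compare representations.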

Page Count
14 pages

Category
Computer Science:
Computation and Language