SpidR: Learning Fast and Stable Linguistic Units for Spoken Language Models Without Supervision
By: Maxime Poli, Mahi Luthra, Youssef Benchekroun, and more
Potential Business Impact:
Teaches computers to understand speech without words.
Parallel advances in language modeling and speech representation learning have raised the prospect of learning language directly from speech, without textual intermediates. This requires extracting semantic representations from the audio signal itself. Our contributions are threefold. First, we introduce SpidR, a self-supervised speech representation model that efficiently learns representations with highly accessible phonetic information, making it particularly suited for textless spoken language modeling. It is trained on raw waveforms using a masked prediction objective combined with self-distillation and online clustering. The intermediate layers of the student model learn to predict cluster assignments derived from the teacher's intermediate layers. This learning objective stabilizes the online clustering procedure compared to previous approaches, resulting in higher-quality codebooks. SpidR outperforms wav2vec 2.0, HuBERT, WavLM, and DinoSR on downstream language modeling benchmarks (sWUGGY, sBLIMP, tSC). Second, we systematically evaluate, across models and layers, the correlation between speech-unit quality (ABX, PNMI) and language modeling performance, validating these metrics as reliable proxies. Finally, SpidR significantly reduces pretraining time compared to HuBERT, requiring only one day on 16 GPUs instead of a week. This speedup is enabled by the pretraining method and an efficient codebase, allowing faster iteration and easier experimentation. We open-source the training code and model checkpoints at https://github.com/facebookresearch/spidr.
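To make the objective concrete, here is a minimal PyTorch sketch (not the released SpidR code) of the training step the abstract describes: a student predicts, at masked positions, cluster assignments computed from an EMA teacher's intermediate layers, while per-layer codebooks are updated online. All sizes, layer counts, and update coefficients below are illustrative assumptions, not values from the paper.

```python
# Hypothetical sketch of masked prediction + self-distillation + online
# clustering. Dimensions and hyperparameters are assumptions for illustration.
import copy
import torch
import torch.nn.functional as F

DIM, CODES, LAYERS = 768, 256, 3  # assumed sizes, not from the paper

student = torch.nn.ModuleList(
    torch.nn.TransformerEncoderLayer(DIM, nhead=8, dropout=0.0, batch_first=True)
    for _ in range(LAYERS))
heads = torch.nn.ModuleList(torch.nn.Linear(DIM, CODES) for _ in range(LAYERS))
mask_emb = torch.nn.Parameter(torch.zeros(DIM))          # learned mask token
teacher = copy.deepcopy(student).requires_grad_(False)   # EMA copy, no gradients
codebooks = [F.normalize(torch.randn(CODES, DIM), dim=-1) for _ in range(LAYERS)]

@torch.no_grad()
def ema_update(tau=0.999):
    # Teacher parameters track an exponential moving average of the student's.
    for ps, pt in zip(student.parameters(), teacher.parameters()):
        pt.mul_(tau).add_(ps, alpha=1.0 - tau)

def train_step(frames, mask):
    """frames: (batch, time, DIM) waveform-encoder features; mask: (batch, time) bool."""
    s = torch.where(mask.unsqueeze(-1), mask_emb, frames)  # student sees masked input
    t = frames                                             # teacher sees the clean input
    loss = 0.0
    for i in range(LAYERS):
        s = student[i](s)
        with torch.no_grad():
            t = teacher[i](t)
            tn = F.normalize(t, dim=-1)
            # Cluster assignment: nearest codebook entry by cosine similarity.
            targets = (tn @ codebooks[i].T).argmax(dim=-1)
            # Online codebook update: nudge each used code toward the mean of
            # the masked teacher features assigned to it.
            for c in targets[mask].unique():
                feats = tn[mask][targets[mask] == c].mean(dim=0)
                codebooks[i][c] = F.normalize(
                    0.99 * codebooks[i][c] + 0.01 * feats, dim=0)
        # The student's intermediate layer predicts the teacher's assignments
        # at masked positions only.
        loss = loss + F.cross_entropy(heads[i](s)[mask], targets[mask])
    return loss / LAYERS
```

In a full loop one would backpropagate this loss, step the student optimizer, and then call ema_update(); deriving targets from a slow-moving teacher rather than a live k-means stage is, per the abstract, what stabilizes the online clustering.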
Similar Papers
SpidR-Adapt: A Universal Speech Representation Model for Few-Shot Adaptation
Computation and Language
Teaches computers new languages with very little talking.
WhiSPA: Semantically and Psychologically Aligned Whisper with Self-Supervised Contrastive and Student-Teacher Learning
Audio and Speech Processing
Helps computers understand emotions in spoken words.