Score: 1

SpidR-Adapt: A Universal Speech Representation Model for Few-Shot Adaptation

Published: December 24, 2025 | arXiv ID: 2512.21204v1

By: Mahi Luthra, Jiayi Shen, Maxime Poli, and more

BigTech Affiliations: Meta

Potential Business Impact:

Teaches computers new languages from only a small amount of recorded speech.

Business Areas:
Semantic Web, Internet Services

Human infants, with only a few hundred hours of speech exposure, acquire the basic units of new languages, highlighting a striking efficiency gap compared to data-hungry self-supervised speech models. To address this gap, this paper introduces SpidR-Adapt, a model for rapid adaptation to new languages using minimal unlabeled data. We cast such low-resource speech representation learning as a meta-learning problem and construct a multi-task adaptive pre-training (MAdaPT) protocol that formulates the adaptation process as a bi-level optimization problem. To enable scalable meta-training under this framework, we propose a novel heuristic solution, first-order bi-level optimization (FOBLO), which avoids heavy computation costs. Finally, we stabilize meta-training with a robust initialization obtained through interleaved supervision, which alternates self-supervised and supervised objectives. Empirically, SpidR-Adapt achieves rapid gains in phonemic discriminability (ABX) and spoken language modeling (sWUGGY, sBLIMP, tSC), improving over in-domain language models after training on less than 1h of target-language audio, making it over $100\times$ more data-efficient than standard training. These findings highlight a practical, architecture-agnostic path toward biologically inspired, data-efficient representations. We open-source the training code and model checkpoints at https://github.com/facebookresearch/spidr-adapt.
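The abstract does not spell out FOBLO's update rule, but a first-order bi-level scheme is typically a FOMAML-style loop: adapt a copy of the model to each task (here, each language) in an inner loop, then apply the adapted model's gradients directly to the shared initialization, skipping the second-order terms of exact bi-level optimization. The sketch below illustrates that pattern under stated assumptions; `ssl_loss`, `model`, and `language_batches` are hypothetical placeholders, not the paper's API.

```python
# A minimal FOMAML-style sketch of first-order bi-level meta-training,
# illustrating the idea the abstract attributes to FOBLO. All names here
# (ssl_loss, language_batches) are hypothetical; the actual implementation
# lives at https://github.com/facebookresearch/spidr-adapt.
import copy
import torch

def meta_step(model, language_batches, ssl_loss,
              inner_lr=1e-3, inner_steps=3, meta_lr=1e-4):
    """One meta-update over a set of per-language tasks.

    Inner loop: adapt a clone of the model to each language with a few
    self-supervised steps. Outer loop: first-order approximation, i.e.
    treat the adapted model's gradients as gradients w.r.t. the shared
    initialization, avoiding backprop through the inner-loop updates.
    """
    meta_grads = [torch.zeros_like(p) for p in model.parameters()]

    for support, query in language_batches:  # (adaptation, held-out) data per language
        learner = copy.deepcopy(model)
        inner_opt = torch.optim.SGD(learner.parameters(), lr=inner_lr)

        for _ in range(inner_steps):         # inner loop: adapt to this language
            inner_opt.zero_grad()
            ssl_loss(learner, support).backward()
            inner_opt.step()

        learner.zero_grad()
        ssl_loss(learner, query).backward()  # outer loss on held-out data

        # First-order shortcut: accumulate the adapted model's gradients
        # directly onto the initialization (no second-order terms).
        for g, p in zip(meta_grads, learner.parameters()):
            g += p.grad / len(language_batches)

    with torch.no_grad():                    # outer update of the initialization
        for p, g in zip(model.parameters(), meta_grads):
            p -= meta_lr * g
```

Interleaving a supervised objective with `ssl_loss` across meta-steps, as the abstract's "interleaved supervision" describes, would slot into the same loop by alternating which loss the inner and outer steps use.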

Country of Origin
🇺🇸 United States

Page Count
21 pages

Category
Computer Science:
Computation and Language