Score: 1

SONAR: Self-Distilled Continual Pre-training for Domain Adaptive Audio Representation

Published: September 19, 2025 | arXiv ID: 2509.15703v1

By: Yizhou Zhang, Yuan Gao, Wangjin Zhou, and more

Potential Business Impact:

Learns new sounds without forgetting old ones.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Self-supervised learning (SSL) on large-scale datasets like AudioSet has become the dominant paradigm for audio representation learning. The continuous influx of new, unlabeled audio presents an opportunity to enrich these static representations, but the naive approach of retraining the model from scratch on all available data is computationally prohibitive and discards the valuable knowledge embedded in the previously trained model weights. To address this inefficiency, we propose SONAR (Self-distilled cONtinual pre-training for domain adaptive Audio Representation), a continual pre-training framework built upon BEATs. SONAR adapts effectively to new domains while mitigating catastrophic forgetting by tackling three key challenges: jointly sampling new and prior data, applying regularization to balance specificity and generality, and dynamically expanding the tokenizer codebook to cover novel acoustic patterns. Experiments across four distinct domains demonstrate that our method achieves both high adaptability and robust resistance to forgetting.
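The three ingredients named in the abstract can be sketched in a few lines of PyTorch. The following is a minimal illustration, not the paper's implementation: the helper names (make_joint_loader, distillation_step, expand_codebook), the 70/30 mixing ratio, the distillation weight alpha, and the plain MSE stand-ins for the masked-prediction and distillation objectives are all illustrative assumptions.

```python
import torch
import torch.nn.functional as F
from torch.utils.data import ConcatDataset, DataLoader, TensorDataset, WeightedRandomSampler

def make_joint_loader(new_ds, prior_ds, new_fraction=0.7, batch_size=32):
    """Joint sampling: each batch mixes new-domain clips with prior
    (e.g. AudioSet) clips so the model keeps seeing old distributions."""
    joint = ConcatDataset([new_ds, prior_ds])
    # Per-sample weights so roughly new_fraction of draws come from new_ds.
    w_new = new_fraction / len(new_ds)
    w_old = (1.0 - new_fraction) / len(prior_ds)
    weights = torch.tensor([w_new] * len(new_ds) + [w_old] * len(prior_ds))
    sampler = WeightedRandomSampler(weights, num_samples=len(joint))
    return DataLoader(joint, batch_size=batch_size, sampler=sampler)

def distillation_step(student, teacher, x, targets, optimizer, alpha=0.5):
    """Regularization: a frozen copy of the pre-trained model (teacher)
    anchors the student's features (generality) while the SSL objective
    pulls them toward the new domain (specificity)."""
    optimizer.zero_grad()
    feats_s = student(x)
    with torch.no_grad():
        feats_t = teacher(x)                  # frozen prior knowledge
    loss_task = F.mse_loss(feats_s, targets)  # stand-in for the masked-prediction loss
    loss_kd = F.mse_loss(feats_s, feats_t)    # self-distillation penalty
    loss = loss_task + alpha * loss_kd
    loss.backward()
    optimizer.step()
    return loss.item()

def expand_codebook(codebook, new_centroids):
    """Codebook expansion: append entries for novel acoustic patterns
    (e.g. cluster means of poorly quantized new-domain frames)."""
    return torch.nn.Parameter(torch.cat([codebook.detach(), new_centroids], dim=0))

# Toy run with random features standing in for audio embeddings.
new_ds = TensorDataset(torch.randn(100, 128))
prior_ds = TensorDataset(torch.randn(400, 128))
student = torch.nn.Linear(128, 64)
teacher = torch.nn.Linear(128, 64)
for p in teacher.parameters():
    p.requires_grad_(False)
opt = torch.optim.Adam(student.parameters(), lr=1e-4)
for (x,) in make_joint_loader(new_ds, prior_ds):
    distillation_step(student, teacher, x, torch.randn(x.size(0), 64), opt)
    break  # one illustrative step
```

In this sketch the distillation term plays the anti-forgetting role: pulling the student's features toward the frozen teacher penalizes drift away from the original representation, while the task loss drives adaptation to the new domain.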

Country of Origin
🇯🇵 Japan

Page Count
5 pages

Category
Computer Science:
Sound