Cross-Modal Knowledge Distillation with Multi-Level Data Augmentation for Low-Resource Audio-Visual Sound Event Localization and Detection

Published: August 17, 2025 | arXiv ID: 2508.12334v1

By: Qing Wang, Ya Jiang, Hang Chen, and more

Potential Business Impact:

Enables systems to detect which sounds occur in a video and locate where they come from more accurately, even when little audio-visual training data is available.

This work presents a cross-modal knowledge distillation (CMKD) framework combined with multi-level data augmentation for low-resource audio-visual (AV) sound event localization and detection (SELD). An audio-only SELD model acts as the teacher, transferring knowledge to an AV student model through both output responses and intermediate feature representations. To enhance learning, multi-level data augmentation is applied by mixing features at randomly selected network layers, with associated loss functions tailored to the SELD task. Extensive experiments on the DCASE 2023 and 2024 SELD datasets show that the proposed method significantly improves AV SELD performance, yielding relative gains of 22% to 36% in the overall metric over the baseline. Notably, the approach achieves results comparable to or better than teacher models trained on much larger datasets, surpassing state-of-the-art methods on both the DCASE 2023 and 2024 SELD tasks.
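To make the distillation recipe concrete, below is a minimal PyTorch sketch of the two transfer paths the abstract describes: a response-level term matching the student's SELD outputs to the teacher's, and a feature-level term matching intermediate representations, plus a manifold-mixup-style feature mixing operation as one plausible reading of the multi-level augmentation. The class and function names (`AudioTeacher`, `AudioVisualStudent`, `cmkd_loss`, `mix_features`), the architectures, the loss weights, and the output dimension are all illustrative assumptions, not the paper's actual configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AudioTeacher(nn.Module):
    """Hypothetical audio-only SELD teacher (stand-in architecture)."""
    def __init__(self, audio_dim=64, hidden=128, out_dim=39):
        super().__init__()
        self.encoder = nn.GRU(audio_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, out_dim)

    def forward(self, audio):
        feat, _ = self.encoder(audio)   # intermediate feature, used for distillation
        return self.head(feat), feat

class AudioVisualStudent(nn.Module):
    """Hypothetical AV student: fuses audio and video features."""
    def __init__(self, audio_dim=64, video_dim=512, hidden=128, out_dim=39):
        super().__init__()
        self.fuse = nn.Linear(audio_dim + video_dim, hidden)
        self.encoder = nn.GRU(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, out_dim)

    def forward(self, audio, video):
        x = torch.tanh(self.fuse(torch.cat([audio, video], dim=-1)))
        feat, _ = self.encoder(x)
        return self.head(feat), feat

def mix_features(feat, lam=None):
    """Manifold-mixup-style augmentation: mix features across the batch.
    The paper mixes features at randomly selected layers; this shows the
    core operation on one feature tensor (assumed form)."""
    if lam is None:
        lam = torch.distributions.Beta(0.5, 0.5).sample().item()
    perm = torch.randperm(feat.size(0))
    return lam * feat + (1 - lam) * feat[perm], perm, lam

def cmkd_loss(student_out, student_feat, teacher_out, teacher_feat,
              target, alpha=0.5, beta=0.5):
    """Task loss plus response-level and feature-level distillation terms.
    The weights alpha/beta are illustrative, not the paper's values."""
    task = F.mse_loss(student_out, target)              # e.g. ACCDOA-style regression
    response_kd = F.mse_loss(student_out, teacher_out)  # match teacher outputs
    feature_kd = F.mse_loss(student_feat, teacher_feat) # match intermediate features
    return task + alpha * response_kd + beta * feature_kd

# One illustrative training step on random tensors.
B, T = 4, 50
teacher = AudioTeacher().eval()      # teacher is pretrained and frozen
student = AudioVisualStudent()
opt = torch.optim.Adam(student.parameters(), lr=1e-3)

audio = torch.randn(B, T, 64)
video = torch.randn(B, T, 512)
target = torch.randn(B, T, 39)       # placeholder SELD labels

with torch.no_grad():
    t_out, t_feat = teacher(audio)
s_out, s_feat = student(audio, video)
loss = cmkd_loss(s_out, s_feat, t_out, t_feat, target)
loss.backward()
opt.step()
```

In a real pipeline, the teacher would be pretrained on the larger audio-only data mentioned in the abstract and kept frozen, and the task loss would be the actual SELD objective for the chosen output format rather than the plain MSE used here.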

Country of Origin
🇨🇳 China

Page Count
34 pages

Category
Computer Science: Sound