Cross-Modal Knowledge Distillation with Multi-Level Data Augmentation for Low-Resource Audio-Visual Sound Event Localization and Detection
By: Qing Wang, Ya Jiang, Hang Chen, and more
Potential Business Impact:
Helps computers detect and locate sounds in videos more accurately.
This work presents a cross-modal knowledge distillation (CMKD) framework combined with multi-level data augmentation for low-resource audio-visual (AV) sound event localization and detection (SELD). An audio-only SELD model acts as the teacher, transferring knowledge to an AV student model through both output responses and intermediate feature representations. To enhance learning, multi-level data augmentation is applied by mixing features randomly selected from multiple network layers, paired with loss functions tailored to the SELD task. Extensive experiments on the DCASE 2023 and 2024 SELD datasets show that the proposed method significantly improves AV SELD performance, yielding relative gains of 22% to 36% in the overall metric over the baseline. Notably, the approach achieves results comparable to or better than teacher models trained on much larger datasets, surpassing state-of-the-art methods on both the DCASE 2023 and 2024 SELD tasks.
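To make the two distillation signals and the feature-mixing augmentation concrete, below is a minimal PyTorch sketch, assuming a frozen audio-only teacher and an AV student whose outputs and selected intermediate features have matching shapes. The function names, loss weights (alpha, beta), and the manifold-mixup-style mixing are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F


def distillation_loss(student_out, teacher_out,
                      student_feats, teacher_feats,
                      task_loss, alpha=0.5, beta=0.5):
    """Combine the SELD task loss with response- and feature-level KD terms.

    student_out / teacher_out: model predictions (e.g. ACCDOA-style vectors).
    student_feats / teacher_feats: lists of intermediate feature maps, one per
    distilled layer, assumed to have matching shapes at each level.
    """
    # Response-level distillation: match the student's outputs to the
    # frozen audio-only teacher's outputs.
    resp_kd = F.mse_loss(student_out, teacher_out.detach())

    # Feature-level distillation: match intermediate representations.
    feat_kd = sum(F.mse_loss(s, t.detach())
                  for s, t in zip(student_feats, teacher_feats)) / len(student_feats)

    return task_loss + alpha * resp_kd + beta * feat_kd


def mix_features(features, targets, alpha=0.5):
    """Sketch of one augmentation step: mix two samples' features at a given
    network layer (the layer itself would be chosen at random per batch) and
    mix the targets with the same coefficient."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(features.size(0))
    mixed_feats = lam * features + (1.0 - lam) * features[perm]
    mixed_targets = lam * targets + (1.0 - lam) * targets[perm]
    return mixed_feats, mixed_targets, lam
```

In a training loop, one would pick a layer at random each batch, apply `mix_features` to that layer's activations, and feed the combined loss from `distillation_loss` to the optimizer; the exact layer set and loss weighting would need tuning for the SELD task.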
Similar Papers
Integrating Spatial and Semantic Embeddings for Stereo Sound Event Localization in Videos
Audio and Speech Processing
Helps computers understand sounds and sights together.
AMMKD: Adaptive Multimodal Multi-teacher Distillation for Lightweight Vision-Language Models
CV and Pattern Recognition
Makes phone apps understand pictures and words better.
Asymmetric Cross-Modal Knowledge Distillation: Bridging Modalities with Weak Semantic Consistency
CV and Pattern Recognition
Teaches computers to learn from different kinds of pictures.