Decoupled Audio-Visual Dataset Distillation
By: Wenyuan Li, Guang Li, Keisuke Maeda, and more
Potential Business Impact:
Makes AI understand sounds and pictures together using much less training data.
Audio-visual dataset distillation aims to compress large-scale datasets into compact subsets while preserving the performance of models trained on the original data. However, conventional Distribution Matching (DM) methods struggle to capture intrinsic cross-modal alignment. Subsequent studies have attempted to introduce cross-modal matching, but two major challenges remain: (i) independently and randomly initialized encoders produce inconsistent modality mapping spaces, increasing training difficulty; and (ii) direct interaction between modalities tends to damage modality-specific (private) information, degrading the quality of the distilled data. To address these challenges, we propose DAVDD, a pretraining-based decoupled audio-visual distillation framework. DAVDD leverages a bank of diverse pretrained encoders to obtain stable modality features and uses a lightweight decoupler bank to disentangle them into common and private representations. To preserve cross-modal structure, we further introduce Common Intermodal Matching together with a Sample-Distribution Joint Alignment strategy, ensuring that shared representations are aligned at both the sample level and the global distribution level. Meanwhile, private representations are kept entirely isolated from cross-modal interaction, safeguarding modality-specific cues throughout distillation. Extensive experiments across multiple benchmarks show that DAVDD achieves state-of-the-art results under all IPC (instances-per-class) settings, demonstrating the effectiveness of decoupled representation learning for high-quality audio-visual dataset distillation. Code will be released.
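Since the code has not yet been released, the sketch below is only a minimal PyTorch illustration of the decoupling-and-matching idea the abstract describes, not DAVDD itself. All names here (Decoupler, sample_alignment, distribution_alignment) and the concrete loss choices (per-sample cosine alignment, first/second-moment matching) are assumptions standing in for the paper's Common Intermodal Matching and Sample-Distribution Joint Alignment; the key structural point it shows is that private features never enter any cross-modal term.

```python
# Hypothetical sketch of decoupled audio-visual matching; module and loss
# names are illustrative assumptions, not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Decoupler(nn.Module):
    """Lightweight head that splits a frozen pretrained feature into a
    shared ("common") part and a modality-specific ("private") part."""
    def __init__(self, feat_dim: int, common_dim: int, private_dim: int):
        super().__init__()
        self.common = nn.Linear(feat_dim, common_dim)
        self.private = nn.Linear(feat_dim, private_dim)

    def forward(self, feat: torch.Tensor):
        return self.common(feat), self.private(feat)

def sample_alignment(c_audio: torch.Tensor, c_visual: torch.Tensor) -> torch.Tensor:
    # Sample-level alignment: pull each paired sample's audio and visual
    # common features together (mean cosine distance over the batch).
    return (1.0 - F.cosine_similarity(c_audio, c_visual, dim=-1)).mean()

def distribution_alignment(c_audio: torch.Tensor, c_visual: torch.Tensor) -> torch.Tensor:
    # Global distribution-level alignment: match batch statistics of the
    # two common-feature sets (first and second moments as a simple proxy).
    mean_gap = (c_audio.mean(0) - c_visual.mean(0)).pow(2).sum()
    std_gap = (c_audio.std(0) - c_visual.std(0)).pow(2).sum()
    return mean_gap + std_gap

# Toy usage: frozen pretrained encoders would produce a_feat / v_feat;
# random tensors stand in for a batch of 8 paired samples here.
a_feat, v_feat = torch.randn(8, 512), torch.randn(8, 512)
dec_a, dec_v = Decoupler(512, 128, 128), Decoupler(512, 128, 128)
ca, pa = dec_a(a_feat)  # private features pa / pv are deliberately
cv, pv = dec_v(v_feat)  # excluded from every cross-modal loss term
loss = sample_alignment(ca, cv) + distribution_alignment(ca, cv)
loss.backward()
```

In a full distillation loop, a loss of this shape would be computed on both the real and the synthetic data so that the synthetic set is optimized to reproduce the real set's cross-modal structure, while the untouched private branches retain modality-specific cues.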
Similar Papers
CovMatch: Cross-Covariance Guided Multimodal Dataset Distillation with Trainable Text Encoder
CV and Pattern Recognition
Makes AI learn faster with fewer examples.
CoDA: From Text-to-Image Diffusion Models to Training-Free Dataset Distillation
CV and Pattern Recognition
Makes AI learn from less data, faster.
Dynamic-Aware Video Distillation: Optimizing Temporal Resolution Based on Video Semantics
CV and Pattern Recognition
Makes video learning faster by removing extra frames.