CovMatch: Cross-Covariance Guided Multimodal Dataset Distillation with Trainable Text Encoder
By: Yongmin Lee, Hye Won Chung
Potential Business Impact:
Makes AI learn faster with fewer examples.
Multimodal dataset distillation aims to synthesize a small set of image-text pairs that enables efficient training of large-scale vision-language models. While dataset distillation has shown promise in unimodal tasks, extending it to multimodal contrastive learning presents key challenges: learning cross-modal alignment and managing the high computational cost of large encoders. Prior approaches address scalability by freezing the text encoder and updating only the image encoder and text projection layer. However, we find that this severely limits semantic alignment and becomes a bottleneck for performance scaling. We propose CovMatch, a scalable dataset distillation framework that aligns the cross-covariance of real and synthetic features while regularizing feature distributions within each modality. Unlike prior approaches, CovMatch enables joint optimization of both encoders, leading to stronger cross-modal alignment and improved performance. Evaluated on Flickr30K and COCO, CovMatch outperforms state-of-the-art multimodal distillation methods and achieves up to 6.8% absolute gains in retrieval accuracy using only 500 synthetic pairs.
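The abstract does not specify the exact objective, so the following is a minimal sketch of what a cross-covariance matching loss of this kind could look like in PyTorch. The function names (`cross_covariance`, `covmatch_loss`), the Frobenius-norm formulation, and the weight `lam` are illustrative assumptions, not the paper's actual definitions.

```python
import torch

def cross_covariance(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """Cross-covariance between two feature batches of shape (n, d_a) and (n, d_b).

    Centers each modality's features over the batch, then returns the
    (d_a x d_b) covariance matrix.
    """
    a_centered = a - a.mean(dim=0, keepdim=True)
    b_centered = b - b.mean(dim=0, keepdim=True)
    return a_centered.T @ b_centered / (a.shape[0] - 1)

def covmatch_loss(real_img, real_txt, syn_img, syn_txt, lam: float = 0.1):
    """Illustrative objective (an assumption, not the paper's exact loss):
    match the real and synthetic image-text cross-covariance under a squared
    Frobenius norm, plus within-modality covariance regularizers weighted by
    the hypothetical coefficient `lam`."""
    # Align cross-modal second-order statistics of real vs. synthetic pairs.
    cross_term = (cross_covariance(real_img, real_txt)
                  - cross_covariance(syn_img, syn_txt)).pow(2).sum()
    # Regularize feature distributions within each modality.
    intra_img = (cross_covariance(real_img, real_img)
                 - cross_covariance(syn_img, syn_img)).pow(2).sum()
    intra_txt = (cross_covariance(real_txt, real_txt)
                 - cross_covariance(syn_txt, syn_txt)).pow(2).sum()
    return cross_term + lam * (intra_img + intra_txt)
```

Because such a loss depends on the synthetic features only through differentiable batch statistics, it could in principle be backpropagated through both encoders jointly, which is consistent with the abstract's claim that CovMatch optimizes the image and text encoders together rather than freezing the text encoder.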
Similar Papers
Efficient Multimodal Dataset Distillation via Generative Models
CV and Pattern Recognition
Makes AI learn from pictures and words faster.
Decoupled Audio-Visual Dataset Distillation
CV and Pattern Recognition
Makes AI understand sounds and pictures together better.
Leveraging Multi-Modal Information to Enhance Dataset Distillation
CV and Pattern Recognition
Makes fake pictures teach computers better.