Score: 1

CovMatch: Cross-Covariance Guided Multimodal Dataset Distillation with Trainable Text Encoder

Published: October 21, 2025 | arXiv ID: 2510.18583v1

By: Yongmin Lee, Hye Won Chung

Potential Business Impact:

Makes AI learn faster with fewer examples.

Business Areas:
Image Recognition Data and Analytics, Software

Multimodal dataset distillation aims to synthesize a small set of image-text pairs that enables efficient training of large-scale vision-language models. While dataset distillation has shown promise in unimodal tasks, extending it to multimodal contrastive learning presents key challenges: learning cross-modal alignment and managing the high computational cost of large encoders. Prior approaches address scalability by freezing the text encoder and updating only the image encoder and text projection layer. However, we find that this severely limits semantic alignment and becomes a bottleneck for performance scaling. We propose CovMatch, a scalable dataset distillation framework that aligns the cross-covariance of real and synthetic features while regularizing feature distributions within each modality. Unlike prior approaches, CovMatch enables joint optimization of both encoders, leading to stronger cross-modal alignment and improved performance. Evaluated on Flickr30K and COCO, CovMatch outperforms state-of-the-art multimodal distillation methods and achieves up to 6.8% absolute gains in retrieval accuracy using only 500 synthetic pairs.
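To make the core idea concrete, here is a minimal NumPy sketch of a cross-covariance matching objective: compute the cross-covariance between image and text features for the real and synthetic sets, then penalize their Frobenius distance. This is an illustrative assumption based only on the abstract, not the authors' exact loss (which also regularizes within-modality feature distributions); the function names and shapes are hypothetical.

```python
import numpy as np

def cross_covariance(img_feats, txt_feats):
    """Cross-covariance between paired image and text embeddings.

    img_feats: (n, d_img) array, txt_feats: (n, d_txt) array.
    Features are mean-centered within each modality first.
    """
    img = img_feats - img_feats.mean(axis=0, keepdims=True)
    txt = txt_feats - txt_feats.mean(axis=0, keepdims=True)
    return img.T @ txt / img.shape[0]  # (d_img, d_txt)

def covmatch_loss(real_img, real_txt, syn_img, syn_txt):
    """Squared Frobenius distance between real and synthetic
    cross-covariance matrices (a sketch of the matching term)."""
    c_real = cross_covariance(real_img, real_txt)
    c_syn = cross_covariance(syn_img, syn_txt)
    return float(np.sum((c_real - c_syn) ** 2))
```

In a full pipeline, the synthetic image-text pairs (and, per the paper, both encoders) would be optimized by gradient descent on a loss of this shape, so that a small synthetic set induces the same cross-modal feature statistics as the real data.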

Country of Origin
🇰🇷 Korea, Republic of

Repos / Data Links

Page Count
22 pages

Category
Computer Science:
CV and Pattern Recognition