Score: 2

Efficient Multimodal Dataset Distillation via Generative Models

Published: September 18, 2025 | arXiv ID: 2509.15472v1

By: Zhenghao Zhao, Haoxuan Wang, Junyi Wu, and more

Potential Business Impact:

Makes AI learn from pictures and words faster.

Business Areas:
Text Analytics, Data and Analytics, Software

Dataset distillation aims to synthesize a small dataset from a large one, such that a model trained on the small dataset performs well on the original. With the rapid growth of large language models and multimodal large language models, the importance of multimodal datasets, particularly image-text datasets, has grown significantly. However, existing multimodal dataset distillation methods are constrained by the Matching Training Trajectories algorithm, which significantly increases computing resource requirements and can take days to complete the distillation. In this work, we introduce EDGE, a generative distillation method for efficient multimodal dataset distillation. Specifically, we identify two key challenges in distilling multimodal datasets with generative models: 1) the lack of correlation between generated images and captions, and 2) the lack of diversity among generated samples. To address these issues, we propose a novel generative model training workflow with a bi-directional contrastive loss and a diversity loss. Furthermore, we propose a caption synthesis strategy that further improves text-to-image retrieval performance by introducing more text information. Our method is evaluated on the Flickr30K, COCO, and CC3M datasets, demonstrating superior performance and efficiency compared to existing approaches. Notably, our method achieves results 18x faster than the state-of-the-art method.
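
The abstract names a bi-directional contrastive loss and a diversity loss but does not spell out their formulations. The sketch below shows one common way such terms are written (a CLIP-style symmetric InfoNCE between image and caption embeddings, plus a pairwise-similarity penalty among generated samples); the function names, the 0.1 weight, and the exact formulations are illustrative assumptions, not the paper's actual losses.

```python
# Hypothetical sketch: bi-directional (symmetric) contrastive loss plus a
# simple diversity penalty. Formulations are assumed, not taken from EDGE.
import torch
import torch.nn.functional as F

def bidirectional_contrastive_loss(img_emb, txt_emb, temperature=0.07):
    """CLIP-style symmetric InfoNCE over a batch of paired embeddings."""
    img_emb = F.normalize(img_emb, dim=-1)
    txt_emb = F.normalize(txt_emb, dim=-1)
    logits = img_emb @ txt_emb.t() / temperature           # (B, B) similarity matrix
    targets = torch.arange(img_emb.size(0), device=img_emb.device)
    loss_i2t = F.cross_entropy(logits, targets)            # image -> text direction
    loss_t2i = F.cross_entropy(logits.t(), targets)        # text -> image direction
    return 0.5 * (loss_i2t + loss_t2i)

def diversity_loss(emb):
    """Penalize pairwise similarity among generated samples to encourage diversity."""
    emb = F.normalize(emb, dim=-1)
    sim = emb @ emb.t()                                     # (B, B) cosine similarities
    off_diag = sim - torch.eye(sim.size(0), device=sim.device)
    return off_diag.clamp(min=0).mean()

# Example usage with random embeddings standing in for generator outputs:
imgs, txts = torch.randn(8, 512), torch.randn(8, 512)
total = bidirectional_contrastive_loss(imgs, txts) + 0.1 * diversity_loss(imgs)
```

The bi-directional term ties generated images to their captions in both retrieval directions, while the diversity term discourages the generator from collapsing to near-duplicate samples, which matches the two challenges the abstract identifies.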

Country of Origin
🇺🇸 United States

Repos / Data Links

Page Count
13 pages

Category
Computer Science:
Computer Vision and Pattern Recognition