Efficient Multimodal Dataset Distillation via Generative Models
By: Zhenghao Zhao, Haoxuan Wang, Junyi Wu, and more
Potential Business Impact:
Makes AI learn from pictures and words faster.
Dataset distillation aims to synthesize a small dataset from a large one, such that a model trained on the small dataset performs well on the original. With the rapid rise of large language models and multimodal large language models, the importance of multimodal datasets, particularly image-text datasets, has grown significantly. However, existing multimodal dataset distillation methods rely on the Matching Training Trajectories algorithm, which greatly increases compute requirements and can take days to complete distillation. In this work, we introduce EDGE, a generative distillation method for efficient multimodal dataset distillation. Specifically, we identify two key challenges in distilling multimodal datasets with generative models: 1) the lack of correlation between generated images and captions, and 2) the lack of diversity among generated samples. To address these issues, we propose a novel generative model training workflow with a bi-directional contrastive loss and a diversity loss. Furthermore, we propose a caption synthesis strategy that further improves text-to-image retrieval performance by introducing more text information. Our method is evaluated on the Flickr30K, COCO, and CC3M datasets, demonstrating superior performance and efficiency compared to existing approaches. Notably, our method achieves its results 18x faster than the state-of-the-art method.
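The abstract's bi-directional contrastive loss aligns each generated image with its caption in both directions (image-to-text and text-to-image). The paper does not give the exact formulation, so the sketch below is an assumption: a standard symmetric InfoNCE loss (CLIP-style) over unit-normalized embeddings, in pure Python for clarity; the `temperature` value is likewise a conventional placeholder, not taken from the paper.

```python
import math

def softmax_cross_entropy(logits, target_idx):
    # Numerically stable log-softmax cross-entropy for one row of logits.
    m = max(logits)
    log_sum = m + math.log(sum(math.exp(x - m) for x in logits))
    return log_sum - logits[target_idx]

def bidirectional_contrastive_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE over a batch of paired (image, caption) embeddings.

    img_emb, txt_emb: lists of unit-normalized vectors; pair i is the
    matching image-caption pair. Both directions are averaged, so the
    model is penalized if an image fails to retrieve its caption OR a
    caption fails to retrieve its image.
    """
    n = len(img_emb)
    # Cosine similarity matrix, scaled by temperature.
    sim = [[sum(a * b for a, b in zip(img_emb[i], txt_emb[j])) / temperature
            for j in range(n)] for i in range(n)]
    # Image -> text: row i should assign highest similarity to column i.
    loss_i2t = sum(softmax_cross_entropy(sim[i], i) for i in range(n)) / n
    # Text -> image: transpose, then column j should pick row j.
    sim_t = [[sim[i][j] for i in range(n)] for j in range(n)]
    loss_t2i = sum(softmax_cross_entropy(sim_t[j], j) for j in range(n)) / n
    return 0.5 * (loss_i2t + loss_t2i)
```

As a sanity check, correctly paired orthonormal embeddings yield a near-zero loss, while swapped pairs yield a large one, which is the property the workflow exploits to tie generated images to their captions.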
Similar Papers
Leveraging Multi-Modal Information to Enhance Dataset Distillation
CV and Pattern Recognition
Makes fake pictures teach computers better.
Multi-Modal Dataset Distillation in the Wild
CV and Pattern Recognition
Cleans messy internet data for smarter AI.
Distribution-aware Dataset Distillation for Efficient Image Restoration
CV and Pattern Recognition
Trains AI to fix blurry pictures faster.