Generative Dataset Distillation using Min-Max Diffusion Model
By: Junqiao Fan, Yunjiao Zhou, Min Chang Jordan Ren, and more
Potential Business Impact:
Makes AI learn from fewer fake pictures.
In this paper, we address generative dataset distillation, which uses a generative model to synthesize images: the generator may produce any number of images within a fixed evaluation-time budget. We leverage a popular diffusion model as the generator to synthesize a surrogate dataset, guided by a min-max loss that controls the dataset's diversity and representativeness during training. However, diffusion models are time-consuming at generation time, since each image requires an iterative denoising process. We observe a critical trade-off between the number of generated samples and the image quality governed by the number of diffusion steps, and propose Diffusion Step Reduction to achieve the optimal balance. This paper details our comprehensive method and its performance. Our model achieved $2^{nd}$ place in the generative track of \href{https://www.dd-challenge.com/#/}{The First Dataset Distillation Challenge of ECCV2024}, demonstrating its superior performance.
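The abstract does not spell out the min-max loss, but the idea of jointly rewarding representativeness (synthetic samples stay close to real data) and diversity (synthetic samples spread apart) can be sketched in feature space. The function below is an illustrative assumption, not the paper's actual objective: `min_max_distillation_loss`, its `lam` weight, and the nearest-neighbor/pairwise-distance formulation are all hypothetical names and choices for exposition.

```python
import numpy as np

def min_max_distillation_loss(real_feats, syn_feats, lam=0.5):
    """Illustrative min-max style objective for dataset distillation.

    Representativeness: every real feature should lie near some
    synthetic feature (minimize mean nearest-neighbor distance).
    Diversity: synthetic features should spread out (maximize mean
    pairwise distance, entering the loss with a negative sign).
    Assumes at least two synthetic samples.
    """
    # Pairwise squared distances between real and synthetic features,
    # shape (n_real, n_syn) via broadcasting.
    d_rs = ((real_feats[:, None, :] - syn_feats[None, :, :]) ** 2).sum(-1)
    representativeness = d_rs.min(axis=1).mean()  # term to minimize

    # Pairwise squared distances among synthetic features; the diagonal
    # is zero, so divide by n*(n-1) off-diagonal pairs.
    d_ss = ((syn_feats[:, None, :] - syn_feats[None, :, :]) ** 2).sum(-1)
    n = len(syn_feats)
    diversity = d_ss.sum() / (n * (n - 1))  # term to maximize

    return representativeness - lam * diversity
```

In a real pipeline the features would come from a pretrained encoder and the loss would backpropagate into the diffusion model's conditioning or latents; here plain NumPy arrays stand in for those features to keep the trade-off between the two terms visible.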
Similar Papers
Revisiting Diffusion Models: From Generative Pre-training to One-Step Generation
Machine Learning (CS)
Makes AI create pictures much faster.
Efficient Multimodal Dataset Distillation via Generative Models
CV and Pattern Recognition
Makes AI learn from pictures and words faster.
Dataset Distillation with Probabilistic Latent Features
CV and Pattern Recognition
Makes big computer brains learn with less data.