Video Dataset Condensation with Diffusion Models
By: Zhe Li, Hadrien Reynaud, Mischa Dombrowski, and more
Potential Business Impact:
Makes huge video collections much smaller.
In recent years, the rapid growth of dataset sizes and the increasing complexity of deep learning models have sharply escalated the demand for computational resources, both for data storage and for model training. Dataset distillation has emerged as a promising solution: it generates a compact synthetic dataset that retains the essential information of a large real dataset. However, existing methods often suffer from limited performance and poor data quality, particularly in the video domain. In this paper, we focus on video dataset distillation, employing a video diffusion model to generate high-quality synthetic videos. To enhance representativeness, we introduce the Video Spatio-Temporal U-Net (VST-UNet), a model designed to select a diverse and informative subset of videos that effectively captures the characteristics of the original dataset. To further reduce computational cost, we explore Temporal-Aware Cluster-based Distillation (TAC-DT), a training-free clustering algorithm that selects representative videos without any additional training overhead. We validate our approach through extensive experiments on four benchmark datasets, demonstrating performance improvements of up to 10.61% over the state of the art. Our method consistently outperforms existing approaches across all datasets, establishing a new benchmark for video dataset distillation.
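The abstract does not spell out how a training-free, cluster-based selection like TAC-DT operates internally. As a rough illustration only, here is a minimal sketch of the general idea: embed each video with a temporally aware summary, cluster the embeddings with k-means, and keep the video nearest each centroid. All function names and the embedding scheme here are hypothetical, not the authors' actual method.

```python
import numpy as np
from sklearn.cluster import KMeans

def temporal_aware_embedding(frame_features: np.ndarray) -> np.ndarray:
    """Summarize a video as one vector while keeping coarse temporal order.

    frame_features: (T, D) array of per-frame embeddings (hypothetical input).
    Concatenates the mean embedding of the early, middle, and late thirds,
    so videos with similar content but different dynamics stay separable.
    """
    thirds = np.array_split(frame_features, 3, axis=0)
    return np.concatenate([seg.mean(axis=0) for seg in thirds])

def select_representative_videos(videos: list[np.ndarray], k: int) -> list[int]:
    """Training-free selection: cluster video embeddings with k-means and
    return the index of the video nearest each cluster centroid."""
    embeddings = np.stack([temporal_aware_embedding(v) for v in videos])
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(embeddings)
    selected = []
    for c in range(k):
        members = np.where(km.labels_ == c)[0]
        dists = np.linalg.norm(
            embeddings[members] - km.cluster_centers_[c], axis=1
        )
        selected.append(int(members[np.argmin(dists)]))
    return selected

# Toy usage: 100 random "videos" of 24 frames with 128-dim features,
# condensed to a 10-video representative subset.
rng = np.random.default_rng(0)
videos = [rng.normal(size=(24, 128)) for _ in range(100)]
print(select_representative_videos(videos, k=10))
```

Because selection happens purely in embedding space, no model is trained or fine-tuned, which is what makes this style of condensation cheap relative to optimization-based distillation.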
Similar Papers
Latent Video Dataset Distillation
CV and Pattern Recognition
Makes video AI learn faster from less data.
Dynamic-Aware Video Distillation: Optimizing Temporal Resolution Based on Video Semantics
CV and Pattern Recognition
Makes video learning faster by removing extra frames.
Dataset Distillation with Probabilistic Latent Features
CV and Pattern Recognition
Makes big computer brains learn with less data.