Latent Video Dataset Distillation
By: Ning Li, Antai Andy Liu, Jingran Zhang, and more
Potential Business Impact:
Makes video AI learn faster from less data.
Dataset distillation has demonstrated remarkable effectiveness in high-compression scenarios for image datasets. Although video datasets contain even greater inherent redundancy, existing video dataset distillation methods focus primarily on compression in pixel space, overlooking advances in latent-space representations that are now standard in modern text-to-image and text-to-video models. In this work, we bridge this gap by introducing a novel video dataset distillation approach that operates in the latent space of a state-of-the-art variational encoder. We further employ a diversity-aware data selection strategy to choose samples that are both representative and diverse, and we introduce a simple, training-free method to compress the distilled latent dataset even further. By combining these techniques, our approach achieves new state-of-the-art performance in dataset distillation, outperforming prior methods on all datasets: for example, a 2.6% performance increase on HMDB51 at IPC 1 and a 7.8% performance increase on MiniUCF at IPC 5. Our code is available at https://github.com/liningresearch/Latent_Video_Dataset_Distillation.
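To make the three-stage pipeline in the abstract concrete, below is a minimal, hypothetical sketch: encode videos into latent vectors with a variational encoder, select a small set of representative yet diverse latents, and apply a training-free compression step. The abstract does not specify the selection rule or the compression method, so the k-center greedy selection and half-precision storage used here are illustrative stand-ins, not the paper's actual algorithm; the toy encoder and tensor shapes are likewise assumptions made only so the snippet runs end to end.

```python
# Hypothetical sketch of latent video dataset distillation:
# (1) encode videos to latents, (2) diversity-aware selection, (3) training-free compression.
import torch


def encode_videos(videos: torch.Tensor, encoder: torch.nn.Module) -> torch.Tensor:
    """Map videos of shape (N, T, C, H, W) to one flattened latent vector per video."""
    with torch.no_grad():
        latents = encoder(videos)            # assumed to return (N, ...) latents
    return latents.flatten(start_dim=1)      # (N, D)


def k_center_greedy(latents: torch.Tensor, budget: int) -> list[int]:
    """Diversity-aware selection stand-in: greedily pick points that maximize
    the minimum distance to the already-selected set (k-center objective)."""
    selected = [0]                                        # seed with the first sample
    dists = torch.cdist(latents, latents[0:1]).squeeze(1)
    for _ in range(budget - 1):
        idx = int(torch.argmax(dists))                    # farthest point from current set
        selected.append(idx)
        new_d = torch.cdist(latents, latents[idx:idx + 1]).squeeze(1)
        dists = torch.minimum(dists, new_d)
    return selected


def compress_latents(latents: torch.Tensor) -> torch.Tensor:
    """Training-free compression stand-in: store the selected latents in half precision."""
    return latents.half()


if __name__ == "__main__":
    # Toy stand-ins: a random linear "encoder" and random videos, just to exercise the pipeline.
    encoder = torch.nn.Sequential(
        torch.nn.Flatten(start_dim=1),
        torch.nn.Linear(4 * 3 * 16 * 16, 64),
    )
    videos = torch.randn(32, 4, 3, 16, 16)   # (N, T, C, H, W)
    z = encode_videos(videos, encoder)
    keep = k_center_greedy(z, budget=5)      # e.g. an IPC-style budget of 5 per class
    distilled = compress_latents(z[keep])
    print(distilled.shape, distilled.dtype)  # torch.Size([5, 64]) torch.float16
```

In practice the random encoder would be replaced by a pretrained video (or frame-wise image) VAE, and the selection budget would correspond to the IPC setting being evaluated.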
Similar Papers
Video Dataset Condensation with Diffusion Models
CV and Pattern Recognition
Makes huge video collections much smaller.
Dataset Distillation with Probabilistic Latent Features
CV and Pattern Recognition
Makes big computer brains learn with less data.
Efficient Multimodal Dataset Distillation via Generative Models
CV and Pattern Recognition
Makes AI learn from pictures and words faster.