
Fitting Image Diffusion Models on Video Datasets

Published: September 4, 2025 | arXiv ID: 2509.03794v1

By: Juhun Lee, Simon S. Woo

Potential Business Impact:

Trains image-generating AI faster and with more realistic, diverse outputs by letting it learn from video frames.

Business Areas:
Image Recognition, Data and Analytics, Software

Image diffusion models are trained on independently sampled static images. While this is the bedrock task protocol in generative modeling, capturing the temporal world through the lens of static snapshots is information-deficient by design. This limitation leads to slower convergence, limited distributional coverage, and reduced generalization. In this work, we propose a simple and effective training strategy that leverages the temporal inductive bias present in continuous video frames to improve diffusion training. Notably, the proposed method requires no architectural modification and can be seamlessly integrated into standard diffusion training pipelines. We evaluate our method on the HandCo dataset, where hand-object interactions exhibit dense temporal coherence and subtle variations in finger articulation often result in semantically distinct motions. Empirically, our method accelerates convergence by over 2x and achieves lower FID on both training and validation distributions. It also improves generative diversity by encouraging the model to capture meaningful temporal variations. We further provide an optimization analysis showing that our regularization reduces the gradient variance, which contributes to faster convergence.
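The abstract does not spell out the training strategy, but a minimal sketch of how a temporal regularizer might slot into a standard DDPM-style training step could look like the following. The pairing of adjacent frames, the shared noise and timestep, the epsilon-prediction objective, and the `lam`-weighted consistency term are all illustrative assumptions here, not the authors' actual method.

```python
# Illustrative sketch only: the abstract does not specify the exact loss, so the
# adjacent-frame pairing and the consistency regularizer below are assumptions.
import torch
import torch.nn.functional as F


def diffusion_step_with_temporal_reg(model, frame_t, frame_t1, alphas_cumprod, lam=0.1):
    """One training step on a pair of temporally adjacent video frames.

    `model(noisy, timestep)` is assumed to predict the added noise (epsilon),
    as in standard DDPM training; `alphas_cumprod` is the usual noise schedule.
    """
    b = frame_t.shape[0]
    device = frame_t.device

    # Share the timestep and the noise across the two frames so that the only
    # difference between the two denoising problems is the small temporal change.
    t = torch.randint(0, alphas_cumprod.shape[0], (b,), device=device)
    noise = torch.randn_like(frame_t)
    a = alphas_cumprod[t].view(b, 1, 1, 1)

    noisy_t = a.sqrt() * frame_t + (1 - a).sqrt() * noise
    noisy_t1 = a.sqrt() * frame_t1 + (1 - a).sqrt() * noise

    pred_t = model(noisy_t, t)
    pred_t1 = model(noisy_t1, t)

    # Standard denoising loss on both frames (each is still a valid image sample),
    # so no architectural change is needed relative to image-only training.
    denoise_loss = F.mse_loss(pred_t, noise) + F.mse_loss(pred_t1, noise)

    # Hypothetical temporal-consistency regularizer: adjacent frames should yield
    # similar predictions, which is one plausible way a regularizer could reduce
    # gradient variance as described in the abstract's optimization analysis.
    temporal_reg = F.mse_loss(pred_t, pred_t1)

    return denoise_loss + lam * temporal_reg
```

In use, `frame_t` and `frame_t1` would come from consecutive frames of the same clip in the video dataset, and the returned loss would be backpropagated exactly as in a standard image diffusion pipeline.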

Country of Origin
🇰🇷 Korea, Republic of

Page Count
7 pages

Category
Computer Science:
Computer Vision and Pattern Recognition