End-to-End Training for Autoregressive Video Diffusion via Self-Resampling

Published: December 17, 2025 | arXiv ID: 2512.15702v1

By: Yuwei Guo, Ceyuan Yang, Hao He, and more

Potential Business Impact:

Makes generated videos look realistic and temporally consistent, even at long durations.

Business Areas:
Autonomous Vehicles, Transportation

Autoregressive video diffusion models hold promise for world simulation but are vulnerable to exposure bias arising from the train-test mismatch. While recent works address this via post-training, they typically rely on a bidirectional teacher model or online discriminator. To achieve an end-to-end solution, we introduce Resampling Forcing, a teacher-free framework that enables training autoregressive video models from scratch and at scale. Central to our approach is a self-resampling scheme that simulates inference-time model errors on history frames during training. Conditioned on these degraded histories, a sparse causal mask enforces temporal causality while enabling parallel training with frame-level diffusion loss. To facilitate efficient long-horizon generation, we further introduce history routing, a parameter-free mechanism that dynamically retrieves the top-k most relevant history frames for each query. Experiments demonstrate that our approach achieves performance comparable to distillation-based baselines while exhibiting superior temporal consistency on longer videos owing to native-length training.
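The history routing mechanism described above is parameter-free: for each query frame, it scores the available history frames and keeps only the top-k most relevant ones as conditioning context. The paper does not give implementation details here, so the following is a minimal illustrative sketch under the assumption that relevance is measured by dot-product similarity between pooled per-frame features; the function name and feature shapes are hypothetical.

```python
import numpy as np

def history_routing(query, history, k):
    """Hypothetical sketch of parameter-free history routing.

    query:   (d,) pooled feature vector of the current query frame
    history: (n, d) pooled feature vectors of n history frames
    k:       number of history frames to retain

    Scores each history frame by dot-product similarity with the
    query, then returns the indices and features of the top-k frames.
    """
    scores = history @ query                   # (n,) similarity scores
    topk_idx = np.argsort(scores)[-k:][::-1]   # k best, highest first
    return topk_idx, history[topk_idx]

# Toy usage: one-hot "frame features" make the selection easy to verify.
hist = np.eye(4)
q = np.array([0.1, 0.9, 0.2, 0.0])
idx, selected = history_routing(q, hist, k=2)
# idx -> [1, 2]: the two history frames most similar to the query
```

Because the routing step has no learned parameters, it adds no training cost beyond the similarity computation, which is consistent with the paper's goal of efficient long-horizon generation.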

Page Count
15 pages

Category
Computer Science:
CV and Pattern Recognition