Real-Time Motion-Controllable Autoregressive Video Diffusion
By: Kesen Zhao, Jiaxin Shi, Beier Zhu, and more
Potential Business Impact:
Makes videos move exactly how you want, fast.
Real-time motion-controllable video generation remains challenging due to the inherent latency of bidirectional diffusion models and the lack of effective autoregressive (AR) approaches. Existing AR video diffusion models are limited to simple control signals or text-to-video generation, and often suffer from quality degradation and motion artifacts in few-step generation. To address these challenges, we propose AR-Drag, the first RL-enhanced few-step AR video diffusion model for real-time image-to-video generation with diverse motion control. We first fine-tune a base I2V model to support basic motion control, then further improve it via reinforcement learning with a trajectory-based reward model. Our design preserves the Markov property through a Self-Rollout mechanism and accelerates training by selectively introducing stochasticity in denoising steps. Extensive experiments demonstrate that AR-Drag achieves high visual fidelity and precise motion alignment, significantly reducing latency compared with state-of-the-art motion-controllable video diffusion models (VDMs), while using only 1.3B parameters. Additional visualizations can be found on our project page: https://kesenzhao.github.io/AR-Drag.github.io/.
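To make the trajectory-based reward idea concrete, here is a minimal sketch of how a motion-alignment reward could be computed: points tracked in the generated frames are compared against the user-specified drag trajectory, and a smaller average deviation yields a higher reward. The function name trajectory_reward, the distance-to-reward mapping, and the sigma parameter are illustrative assumptions, not the exact formulation used by AR-Drag.

```python
import numpy as np

def trajectory_reward(tracked_traj: np.ndarray, target_traj: np.ndarray,
                      sigma: float = 10.0) -> float:
    """Illustrative trajectory-alignment reward (assumed form, not from the paper).

    tracked_traj: (T, 2) point positions tracked in the generated frames.
    target_traj:  (T, 2) user-specified drag trajectory of the same length.
    Returns a reward in (0, 1]; higher means closer motion alignment.
    """
    # Mean Euclidean distance between tracked and target points across frames.
    err = np.linalg.norm(tracked_traj - target_traj, axis=-1).mean()
    # Map the distance to a bounded reward; the scale sigma is a free choice here.
    return float(np.exp(-err / sigma))

# Hypothetical usage: a 16-frame clip whose tracked points drift slightly
# from a straight horizontal drag of 60 pixels.
target = np.stack([np.linspace(0, 60, 16), np.zeros(16)], axis=-1)
tracked = target + np.random.default_rng(0).normal(0, 2.0, size=target.shape)
print(trajectory_reward(tracked, target))
```

Such a scalar reward could then be fed to the RL fine-tuning stage described in the abstract, rewarding rollouts whose generated motion follows the requested trajectory.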
Similar Papers
AR-Diffusion: Asynchronous Video Generation with Auto-Regressive Diffusion
CV and Pattern Recognition
Makes videos that look real and flow smoothly.
Recurrent Autoregressive Diffusion: Global Memory Meets Local Attention
CV and Pattern Recognition
Lets AI remember and create longer videos.
VideoSSM: Autoregressive Long Video Generation with Hybrid State-Space Memory
CV and Pattern Recognition
Creates longer, smoother, and more varied videos.