VideoSSM: Autoregressive Long Video Generation with Hybrid State-Space Memory
By: Yifei Yu, Xiaoshan Wu, Xinting Hu, and more
Potential Business Impact:
Creates longer, smoother, and more varied videos.
Autoregressive (AR) diffusion enables streaming, interactive long-video generation by producing frames causally, yet maintaining coherence over minute-scale horizons remains challenging due to accumulated errors, motion drift, and content repetition. We approach this problem from a memory perspective, treating video synthesis as a recurrent dynamical process that requires coordinated short- and long-term context. We propose VideoSSM, a long video model that unifies AR diffusion with a hybrid state-space memory. The state-space model (SSM) serves as an evolving global memory of scene dynamics across the entire sequence, while a context window provides local memory for motion cues and fine details. This hybrid design preserves global consistency without frozen, repetitive patterns, supports prompt-adaptive interaction, and scales in linear time with sequence length. Experiments on short- and long-range benchmarks demonstrate state-of-the-art temporal consistency and motion stability among autoregressive video generators, especially at minute-scale horizons, while enabling content diversity and interactive prompt-based control, thereby establishing a scalable, memory-aware framework for long video generation.
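To make the hybrid-memory idea concrete, below is a minimal PyTorch sketch of how a recurrent global SSM state and a local attention window over recent frame latents might be combined into a per-frame conditioning signal. It is an illustration only, not the authors' implementation: the class name `HybridMemoryBlock`, the diagonal exponential-decay state update, the window size, and all parameter names are assumptions made for the example.

```python
# Hypothetical sketch of a hybrid state-space memory for autoregressive video
# generation: a recurrent global state (linear-time SSM) plus local attention
# over a short window of recent frame latents. Not the paper's actual model.
import torch
import torch.nn as nn
import torch.nn.functional as F


class HybridMemoryBlock(nn.Module):
    def __init__(self, dim: int = 256, state_dim: int = 64, window: int = 8):
        super().__init__()
        self.window = window
        # Diagonal SSM parameters: the global memory evolves in linear time.
        self.log_decay = nn.Parameter(torch.zeros(state_dim))   # per-channel decay
        self.in_proj = nn.Linear(dim, state_dim)                 # frame latent -> state update
        self.out_proj = nn.Linear(state_dim, dim)                # state -> conditioning signal
        # Local attention over a short window of recent frame latents.
        self.qkv = nn.Linear(dim, 3 * dim)
        self.merge = nn.Linear(2 * dim, dim)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        """frames: (batch, time, dim) sequence of per-frame latents."""
        B, T, D = frames.shape
        decay = torch.sigmoid(self.log_decay)           # (state_dim,) values in (0, 1)
        state = frames.new_zeros(B, decay.shape[0])     # evolving global memory
        outputs = []
        for t in range(T):
            # 1) Global memory: exponential-decay SSM update with the current frame.
            state = decay * state + self.in_proj(frames[:, t])
            global_ctx = self.out_proj(state)           # (B, D)

            # 2) Local memory: attention over only the last `window` frames.
            lo = max(0, t - self.window + 1)
            local = frames[:, lo:t + 1]                 # (B, w, D)
            q, k, v = self.qkv(local).chunk(3, dim=-1)
            attn = F.scaled_dot_product_attention(q, k, v)  # (B, w, D)
            local_ctx = attn[:, -1]                     # context for the current frame

            # 3) Fuse global and local memory into one conditioning vector,
            #    which a diffusion denoiser could consume for the next frame.
            outputs.append(self.merge(torch.cat([global_ctx, local_ctx], dim=-1)))
        return torch.stack(outputs, dim=1)              # (B, T, D)


if __name__ == "__main__":
    block = HybridMemoryBlock()
    latents = torch.randn(2, 16, 256)                   # 16 frames of 256-d latents
    print(block(latents).shape)                         # torch.Size([2, 16, 256])
```

The design mirrors the abstract's claim of linear scaling: the global state is updated recurrently once per frame, while attention is confined to a fixed-size local window, so total cost grows linearly with sequence length rather than quadratically.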
Similar Papers
Time-Scaling State-Space Models for Dense Video Captioning
CV and Pattern Recognition
Lets computers describe long videos as they happen.
Recurrent Autoregressive Diffusion: Global Memory Meets Local Attention
CV and Pattern Recognition
Lets AI remember and create longer videos.
Macro-from-Micro Planning for High-Quality and Parallelized Autoregressive Long Video Generation
CV and Pattern Recognition
Makes computers create much longer, smoother videos.