Recurrent Autoregressive Diffusion: Global Memory Meets Local Attention
By: Taiye Chen, Zihan Ding, Anjian Li, and more
Potential Business Impact:
Lets AI remember and create longer videos.
Recent advances in video generation have demonstrated the potential of video diffusion models as world models, generating infinitely long videos autoregressively through masked conditioning. However, such models, which typically rely on local full attention, lack effective memory compression and retrieval for long-term generation beyond the attention window, leading to forgetting and spatiotemporal inconsistencies. To improve the retention of historical information within a fixed memory budget, we introduce a recurrent neural network (RNN) into the diffusion transformer framework. Specifically, a diffusion model incorporating an LSTM with attention achieves performance comparable to state-of-the-art RNN blocks such as TTT and Mamba2. Moreover, existing diffusion-RNN approaches often suffer from performance degradation due to a training-inference gap or a lack of overlap across windows. To address these limitations, we propose a novel Recurrent Autoregressive Diffusion (RAD) framework, which performs frame-wise autoregression for memory update and retrieval, consistently across training and inference. Experiments on the Memory Maze and Minecraft datasets demonstrate the superiority of RAD for long video generation, highlighting the efficiency of the LSTM in sequence modeling.
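To make the core idea concrete, below is a minimal, hypothetical PyTorch sketch of the mechanism the abstract describes: a fixed-size LSTM state acting as global memory that is updated once per frame and read back via attention inside a frame denoiser, with the same frame-wise loop used at training and inference time. All class names, shapes, and the toy denoiser are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): per-frame recurrent memory (LSTM)
# updated autoregressively and retrieved via attention to condition a frame
# denoiser. Module names, shapes, and the denoiser are illustrative.
import torch
import torch.nn as nn


class RecurrentMemoryBlock(nn.Module):
    """Keeps a fixed-budget LSTM state as global memory; attention reads it."""

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.lstm = nn.LSTMCell(dim, dim)                       # global memory update
        self.read = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, frame_tokens, state):
        # frame_tokens: (B, T, D) tokens of the current frame
        # state: (h, c), each (B, D) -- the fixed-size memory
        h, c = state
        # Retrieve: frame tokens attend to the current memory vector.
        mem = h.unsqueeze(1)                                    # (B, 1, D)
        retrieved, _ = self.read(self.norm(frame_tokens), mem, mem)
        frame_tokens = frame_tokens + retrieved
        # Update: pool the frame and write it into the LSTM state.
        pooled = frame_tokens.mean(dim=1)                       # (B, D)
        h, c = self.lstm(pooled, (h, c))
        return frame_tokens, (h, c)


class TinyFrameDenoiser(nn.Module):
    """Stand-in for a diffusion transformer block operating on one frame."""

    def __init__(self, dim: int):
        super().__init__()
        self.memory = RecurrentMemoryBlock(dim)
        self.mlp = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)
        )

    def forward(self, noisy_frame, state):
        x, state = self.memory(noisy_frame, state)
        return noisy_frame + self.mlp(x), state                 # toy denoised frame


if __name__ == "__main__":
    B, T, D, num_frames = 2, 16, 64, 8
    model = TinyFrameDenoiser(D)
    state = (torch.zeros(B, D), torch.zeros(B, D))
    # Frame-wise autoregression: the same memory update/retrieval loop runs
    # at both training and inference time, avoiding a train/test gap.
    for _ in range(num_frames):
        noisy = torch.randn(B, T, D)
        denoised, state = model(noisy, state)
    print(denoised.shape)  # torch.Size([2, 16, 64])
```

The key design point the sketch illustrates is that the memory state has a fixed size regardless of how many frames have been generated, so history beyond the local attention window is retained at constant cost.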
Similar Papers
VideoSSM: Autoregressive Long Video Generation with Hybrid State-Space Memory
CV and Pattern Recognition
Creates longer, smoother, and more varied videos.
AR-Diffusion: Asynchronous Video Generation with Auto-Regressive Diffusion
CV and Pattern Recognition
Makes videos that look real and flow smoothly.
Memory Forcing: Spatio-Temporal Memory for Consistent Scene Generation on Minecraft
CV and Pattern Recognition
Makes game worlds remember past actions for better play.