Context as Memory: Scene-Consistent Interactive Long Video Generation with Memory Retrieval
By: Jiwen Yu, Jianhong Bai, Yiran Qin, and more
Potential Business Impact:
Makes videos remember past scenes for longer.
Recent advances in interactive video generation have shown promising results, yet existing approaches struggle with scene-consistent memory capabilities in long video generation due to limited use of historical context. In this work, we propose Context-as-Memory, which utilizes historical context as memory for video generation. It includes two simple yet effective designs: (1) storing context in frame format without additional post-processing; (2) conditioning by concatenating context and frames to be predicted along the frame dimension at the input, requiring no external control modules. Furthermore, considering the enormous computational overhead of incorporating all historical context, we propose the Memory Retrieval module to select truly relevant context frames by determining FOV (Field of View) overlap between camera poses, which significantly reduces the number of candidate frames without substantial information loss. Experiments demonstrate that Context-as-Memory achieves superior memory capabilities in interactive long video generation compared to state-of-the-art methods, even generalizing effectively to open-domain scenarios not seen during training. Our project page is available at https://context-as-memory.github.io/.
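The FOV-overlap retrieval idea can be sketched as follows. This is a minimal illustrative approximation, not the paper's implementation: poses are simplified to 2D position plus yaw, and "overlap" is approximated by a distance threshold plus a viewing-direction check. The function names, thresholds, and pose format are all assumptions made for illustration.

```python
import math

def angular_diff(a, b):
    """Smallest absolute difference between two angles (radians)."""
    d = (a - b) % (2 * math.pi)
    return min(d, 2 * math.pi - d)

def fov_overlaps(pose_a, pose_b, fov=math.radians(90), max_dist=5.0):
    """Rough proxy for FOV overlap between two cameras: they must be
    spatially close and their viewing directions must differ by less
    than the field of view. Thresholds are illustrative assumptions."""
    (xa, ya, yaw_a) = pose_a
    (xb, yb, yaw_b) = pose_b
    close = math.hypot(xa - xb, ya - yb) <= max_dist
    facing = angular_diff(yaw_a, yaw_b) < fov
    return close and facing

def retrieve_memory(current_pose, history, k=4):
    """Select indices of up to k past frames whose (approximate) FOV
    overlaps the current camera pose, pruning irrelevant context."""
    hits = [i for i, p in enumerate(history) if fov_overlaps(current_pose, p)]
    return hits[-k:]  # keep the k most recent overlapping frames
```

For example, with a history of poses at varying distances and orientations, `retrieve_memory` keeps only the nearby, similarly oriented frames as conditioning context, which is the cost-saving role the Memory Retrieval module plays in the paper.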
Similar Papers
Mixture of Contexts for Long Video Generation
Graphics
Makes videos remember stories for minutes.
Video World Models with Long-term Spatial Memory
CV and Pattern Recognition
Keeps computer-made videos consistent over time.
VideoMem: Enhancing Ultra-Long Video Understanding via Adaptive Memory Management
CV and Pattern Recognition
Lets computers watch and remember long videos.