VideoMemory: Toward Consistent Video Generation via Memory Integration
By: Jinsong Zhou, Yihua Du, Xinli Xu, and more
Potential Business Impact:
Keeps video characters looking the same in every scene.
Maintaining consistent characters, props, and environments across multiple shots is a central challenge in narrative video generation. Existing models can produce high-quality short clips but often fail to preserve entity identity and appearance when scenes change or when entities reappear after long temporal gaps. We present VideoMemory, an entity-centric framework that integrates narrative planning with visual generation through a Dynamic Memory Bank. Given a structured script, a multi-agent system decomposes the narrative into shots, retrieves entity representations from memory, and synthesizes keyframes and videos conditioned on these retrieved states. The Dynamic Memory Bank stores explicit visual and semantic descriptors for characters, props, and backgrounds, and is updated after each shot to reflect story-driven changes while preserving identity. This retrieval-update mechanism enables consistent portrayal of entities across distant shots and supports coherent long-form generation. To evaluate this setting, we construct a 54-case multi-shot consistency benchmark covering character-, prop-, and background-persistent scenarios. Extensive experiments show that VideoMemory achieves strong entity-level coherence and high perceptual quality across diverse narrative sequences.
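To make the retrieval-update mechanism concrete, here is a minimal sketch of a per-shot loop over such a memory bank. All names are hypothetical (EntityState, DynamicMemoryBank, the shot fields, and the generate_shot stub are illustrative, not the authors' API); the paper describes the mechanism at the level of the abstract above, so this only shows the control flow: retrieve entity states, condition generation on them, then write back story-driven changes.

```python
from dataclasses import dataclass


@dataclass
class EntityState:
    """Explicit descriptors stored per entity (character, prop, or background)."""
    entity_id: str
    visual_descriptor: str    # e.g. a reference-appearance summary or embedding handle
    semantic_descriptor: str  # e.g. role, current state, story-driven attributes


class DynamicMemoryBank:
    """Stores entity states; queried before each shot and updated after it."""

    def __init__(self):
        self._bank: dict[str, EntityState] = {}

    def retrieve(self, entity_ids: list[str]) -> list[EntityState]:
        # Return the latest stored state for each entity appearing in the shot.
        return [self._bank[eid] for eid in entity_ids if eid in self._bank]

    def update(self, states: list[EntityState]) -> None:
        # Overwrite mutable descriptors to reflect story-driven changes
        # while the entity_id key preserves identity across shots.
        for state in states:
            self._bank[state.entity_id] = state


def generate_shot(shot: dict, states: list[EntityState]) -> list[EntityState]:
    """Placeholder for keyframe/video synthesis conditioned on retrieved states.

    A real system would invoke the generator here and derive post-shot
    states (e.g. a character now soaked after a river scene) from the script.
    """
    print(f"shot {shot['id']}: conditioning on {[s.entity_id for s in states]}")
    return states  # sketch: assume no appearance change in this toy example


if __name__ == "__main__":
    memory = DynamicMemoryBank()
    memory.update([
        EntityState("hero", "red scarf, short dark hair", "protagonist, determined"),
        EntityState("lantern", "brass, dented lid", "prop carried by hero"),
    ])

    # Hypothetical multi-agent planner output: the script decomposed into shots.
    shots = [
        {"id": 1, "entities": ["hero", "lantern"]},
        {"id": 2, "entities": ["hero"]},  # lantern reappears later, unchanged
        {"id": 3, "entities": ["hero", "lantern"]},
    ]

    for shot in shots:
        retrieved = memory.retrieve(shot["entities"])  # retrieval step
        post_shot = generate_shot(shot, retrieved)     # synthesis step
        memory.update(post_shot)                       # update step
```

The point of the sketch is the ordering: because shot 3 retrieves the same stored state for "lantern" that shot 1 wrote, the prop can reappear after a temporal gap with its identity intact, which is the consistency property the benchmark measures.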
Similar Papers
StoryMem: Multi-shot Long Video Storytelling with Memory
CV and Pattern Recognition
Makes videos tell longer, consistent stories.
VideoMem: Enhancing Ultra-Long Video Understanding via Adaptive Memory Management
CV and Pattern Recognition
Lets computers watch and remember long videos.
WorldMM: Dynamic Multimodal Memory Agent for Long Video Reasoning
CV and Pattern Recognition
Lets computers understand very long videos better.