EgoLCD: Egocentric Video Generation with Long Context Diffusion
By: Liuzhou Zhang, Jiarui Ye, Yuanlei Wang, and more
Potential Business Impact:
Generates long, realistic videos from a first-person point of view.
Generating long, coherent egocentric videos is difficult, as hand-object interactions and procedural tasks require reliable long-term memory. Existing autoregressive models suffer from content drift, where object identity and scene semantics degrade over time. To address this challenge, we introduce EgoLCD, an end-to-end framework for egocentric long-context video generation that treats long video synthesis as a problem of efficient and stable memory management. EgoLCD combines a Long-Term Sparse KV Cache for stable global context with an attention-based short-term memory, extended by LoRA for local adaptation. A Memory Regulation Loss enforces consistent memory usage, and Structured Narrative Prompting provides explicit temporal guidance. Extensive experiments on the EgoVid-5M benchmark demonstrate that EgoLCD achieves state-of-the-art performance in both perceptual quality and temporal consistency, effectively mitigating generative forgetting and representing a significant step toward building scalable world models for embodied AI. Code: https://github.com/AIGeeksGroup/EgoLCD. Website: https://aigeeksgroup.github.io/EgoLCD.
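To make the memory-management idea concrete, here is a minimal PyTorch-style sketch of pairing a budgeted sparse long-term key/value cache with a dense short-term attention window. It is not the authors' implementation: the class name `LongContextMemory`, the norm-based token selection, and all budget values are illustrative assumptions, standing in for whatever learned sparsification and LoRA-extended short-term attention EgoLCD actually uses.

```python
import torch
import torch.nn.functional as F

class LongContextMemory:
    """Toy memory manager: a sparse long-term KV cache plus a dense
    short-term window, queried together with standard attention."""

    def __init__(self, dim, long_budget=256, short_window=64):
        self.dim = dim
        self.long_budget = long_budget      # max tokens kept in long-term cache
        self.short_window = short_window    # recent tokens kept densely
        self.long_k = torch.empty(0, dim)
        self.long_v = torch.empty(0, dim)
        self.short_k = torch.empty(0, dim)
        self.short_v = torch.empty(0, dim)

    def update(self, k, v):
        """Append tokens from a new chunk; overflow from the short-term window
        is sparsified into the long-term cache (key-norm saliency here is
        purely illustrative)."""
        self.short_k = torch.cat([self.short_k, k], dim=0)
        self.short_v = torch.cat([self.short_v, v], dim=0)
        overflow = self.short_k.shape[0] - self.short_window
        if overflow > 0:
            old_k, self.short_k = self.short_k[:overflow], self.short_k[overflow:]
            old_v, self.short_v = self.short_v[:overflow], self.short_v[overflow:]
            self.long_k = torch.cat([self.long_k, old_k], dim=0)
            self.long_v = torch.cat([self.long_v, old_v], dim=0)
            if self.long_k.shape[0] > self.long_budget:
                # keep the highest-norm keys, preserving temporal order
                scores = self.long_k.norm(dim=-1)
                idx = scores.topk(self.long_budget).indices.sort().values
                self.long_k, self.long_v = self.long_k[idx], self.long_v[idx]

    def attend(self, q):
        """Attend jointly over long-term (sparse) and short-term (dense) memory."""
        k = torch.cat([self.long_k, self.short_k], dim=0)
        v = torch.cat([self.long_v, self.short_v], dim=0)
        # add batch and head dims expected by scaled_dot_product_attention
        return F.scaled_dot_product_attention(
            q[None, None], k[None, None], v[None, None]
        )[0, 0]


if __name__ == "__main__":
    dim = 32
    mem = LongContextMemory(dim)
    for _ in range(10):                     # ten generated chunks
        chunk = torch.randn(48, dim)        # token features for one chunk
        mem.update(chunk, chunk)
    out = mem.attend(torch.randn(16, dim))  # queries for the next chunk
    print(out.shape)                        # -> torch.Size([16, 32])
```

The point of the sketch is only the split the abstract describes: a bounded, sparsified global context that keeps memory cost flat over long rollouts, alongside a small dense window for recent frames; the actual selection criterion and the Memory Regulation Loss that keeps cache usage consistent are learned components in the paper.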
Similar Papers
RELIC: Interactive Video World Model with Long-Horizon Memory
CV and Pattern Recognition
Lets computers explore virtual worlds for a long time.
VideoLucy: Deep Memory Backtracking for Long Video Understanding
CV and Pattern Recognition
Helps computers understand long videos better.
VideoMem: Enhancing Ultra-Long Video Understanding via Adaptive Memory Management
CV and Pattern Recognition
Lets computers watch and remember long videos.