Score: 1

EgoLCD: Egocentric Video Generation with Long Context Diffusion

Published: December 4, 2025 | arXiv ID: 2512.04515v1

By: Liuzhou Zhang, Jiarui Ye, Yuanlei Wang, and more

Potential Business Impact:

Creates long, realistic videos from a first-person point of view.

Business Areas:
Motion Capture, Media and Entertainment, Video

Generating long, coherent egocentric videos is difficult, as hand-object interactions and procedural tasks require reliable long-term memory. Existing autoregressive models suffer from content drift, where object identity and scene semantics degrade over time. To address this challenge, we introduce EgoLCD, an end-to-end framework for egocentric long-context video generation that treats long video synthesis as a problem of efficient and stable memory management. EgoLCD combines a Long-Term Sparse KV Cache for stable global context with an attention-based short-term memory, extended by LoRA for local adaptation. A Memory Regulation Loss enforces consistent memory usage, and Structured Narrative Prompting provides explicit temporal guidance. Extensive experiments on the EgoVid-5M benchmark demonstrate that EgoLCD achieves state-of-the-art performance in both perceptual quality and temporal consistency, effectively mitigating generative forgetting and representing a significant step toward building scalable world models for embodied AI. Code: https://github.com/AIGeeksGroup/EgoLCD. Website: https://aigeeksgroup.github.io/EgoLCD.
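
To make the abstract's memory-management framing concrete, below is a minimal PyTorch sketch of the general pattern it describes: a dense short-term attention window backed by a fixed-budget long-term KV cache whose entries are retained by accumulated attention mass. Everything here is an illustrative assumption, not the paper's implementation; the function names (`select_sparse_kv`, `attend`), the attention-mass scoring heuristic, and the sliding-window demotion policy are all hypothetical, and EgoLCD's actual selection rule, LoRA-adapted short-term memory, and Memory Regulation Loss are defined in the paper itself.

```python
import torch
import torch.nn.functional as F

def select_sparse_kv(keys, values, scores, budget):
    """Prune a KV cache to its `budget` most-attended entries,
    preserving temporal order. keys/values: (T, d); scores: (T,)."""
    k = min(budget, keys.shape[0])
    idx = torch.topk(scores, k).indices.sort().values
    return keys[idx], values[idx], scores[idx]

def attend(q, keys, values):
    """Single-query, single-head scaled dot-product attention."""
    w = F.softmax(keys @ q / keys.shape[-1] ** 0.5, dim=0)
    return w @ values, w

# Toy rollout: a sliding short-term window plus a sparse long-term cache.
d, window, budget = 64, 8, 16
lk = torch.empty(0, d); lv = torch.empty(0, d); ls = torch.empty(0)
sk = torch.randn(window, d); sv = torch.randn(window, d)

for step in range(64):
    q = torch.randn(d)  # stand-in for the current frame's query
    keys = torch.cat([lk, sk]); values = torch.cat([lv, sv])
    out, w = attend(q, keys, values)
    ls = ls + w[: lk.shape[0]]  # credit long-term entries by attention mass
    # Demote the oldest short-term entry into the long-term cache ...
    lk = torch.cat([lk, sk[:1]]); lv = torch.cat([lv, sv[:1]])
    ls = torch.cat([ls, w[lk.shape[0] - 1].reshape(1)])
    # ... and admit a fresh frame into the short-term window.
    sk = torch.cat([sk[1:], torch.randn(1, d)])
    sv = torch.cat([sv[1:], torch.randn(1, d)])
    # Enforce the fixed long-term budget by sparse selection.
    lk, lv, ls = select_sparse_kv(lk, lv, ls, budget)
```

The payoff of a fixed budget is that per-step attention cost stays bounded by (window + budget) entries regardless of video length, which is what allows a generator to keep a stable global context without unbounded memory growth or the content drift the abstract describes.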

Repos / Data Links
Code: https://github.com/AIGeeksGroup/EgoLCD
Website: https://aigeeksgroup.github.io/EgoLCD

Page Count
15 pages

Category
Computer Science: Computer Vision and Pattern Recognition