Score: 1

HoloCine: Holistic Generation of Cinematic Multi-Shot Long Video Narratives

Published: October 23, 2025 | arXiv ID: 2510.20822v1

By: Yihao Meng, Hao Ouyang, Yue Yu, and more

Potential Business Impact:

Makes computers create whole movies, not just short clips.

Business Areas:
Video Editing, Content and Publishing, Media and Entertainment, Video

State-of-the-art text-to-video models excel at generating isolated clips but fall short of creating coherent multi-shot narratives, which are the essence of storytelling. We bridge this "narrative gap" with HoloCine, a model that generates entire scenes holistically to ensure global consistency from the first shot to the last. Our architecture achieves precise directorial control through a Window Cross-Attention mechanism that localizes text prompts to specific shots, while a Sparse Inter-Shot Self-Attention pattern (dense within shots but sparse between them) provides the efficiency required for minute-scale generation. Beyond setting a new state of the art in narrative coherence, HoloCine develops remarkable emergent abilities: a persistent memory for characters and scenes, and an intuitive grasp of cinematic techniques. Our work marks a pivotal shift from clip synthesis towards automated filmmaking, making end-to-end cinematic creation a tangible future. Our code is available at: https://holo-cine.github.io/.
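The abstract names two attention mechanisms: Window Cross-Attention, which restricts each shot's video tokens to that shot's portion of the text prompt, and Sparse Inter-Shot Self-Attention, which is dense within a shot but sparse across shots. The sketch below is a rough illustration of how such masks could be constructed, not the authors' implementation; the function names, the optional global-prompt tokens, and the per-shot "anchor token" scheme used to realize cross-shot sparsity are all hypothetical assumptions.

```python
import torch

def window_cross_attention_mask(shot_lengths, prompt_lengths, global_prompt_len=0):
    """Boolean mask [num_video_tokens, num_text_tokens]: video tokens of shot i
    may attend only to shot i's prompt tokens (plus an optional global prompt)."""
    n_vid = sum(shot_lengths)
    n_txt = global_prompt_len + sum(prompt_lengths)
    mask = torch.zeros(n_vid, n_txt, dtype=torch.bool)
    mask[:, :global_prompt_len] = True            # every shot sees the global prompt
    v0, t0 = 0, global_prompt_len
    for v_len, t_len in zip(shot_lengths, prompt_lengths):
        mask[v0:v0 + v_len, t0:t0 + t_len] = True  # shot i <- prompt i only
        v0 += v_len
        t0 += t_len
    return mask

def sparse_inter_shot_self_attention_mask(shot_lengths, anchors_per_shot=1):
    """Boolean mask [N, N]: dense attention within each shot; across shots, only
    the first `anchors_per_shot` tokens of every shot remain visible (assumed)."""
    n = sum(shot_lengths)
    mask = torch.zeros(n, n, dtype=torch.bool)
    starts = torch.tensor([0] + list(shot_lengths)).cumsum(0)
    for i in range(len(shot_lengths)):
        s, e = starts[i].item(), starts[i + 1].item()
        mask[s:e, s:e] = True                      # dense within the shot
        mask[:, s:s + anchors_per_shot] = True     # shot anchors visible to all shots
    return mask

if __name__ == "__main__":
    shots = [4, 6, 5]      # hypothetical per-shot video-token counts
    prompts = [3, 3, 2]    # hypothetical per-shot prompt-token counts
    print(window_cross_attention_mask(shots, prompts, global_prompt_len=2).shape)
    print(sparse_inter_shot_self_attention_mask(shots).float().mean())  # sparsity ratio
```

In a diffusion-transformer setting, masks like these would gate the cross- and self-attention scores; the anchor-token scheme shown here is just one plausible way to read "dense within shots but sparse between them," and the paper's actual pattern may differ.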

Page Count
14 pages

Category
Computer Science:
CV and Pattern Recognition