STAGE: Storyboard-Anchored Generation for Cinematic Multi-shot Narrative
By: Peixuan Zhang, Zijian Jia, Kaiqi Liu and more
Potential Business Impact:
Makes videos tell stories with consistent characters.
While recent advancements in generative models have achieved remarkable visual fidelity in video synthesis, creating coherent multi-shot narratives remains a significant challenge. To address this, keyframe-based approaches have emerged as a promising alternative to computationally intensive end-to-end methods, offering the advantages of fine-grained control and greater efficiency. However, these methods often fail to maintain cross-shot consistency and capture cinematic language. In this paper, we introduce STAGE, a SToryboard-Anchored GEneration workflow that reformulates the keyframe-based multi-shot video generation task. Instead of using sparse keyframes, we propose STEP2 to predict a structural storyboard composed of start-end frame pairs for each shot. We introduce a multi-shot memory pack to ensure long-range entity consistency, a dual-encoding strategy for intra-shot coherence, and a two-stage training scheme to learn cinematic inter-shot transitions. We also contribute the large-scale ConStoryBoard dataset, comprising high-quality movie clips with fine-grained annotations for story progression, cinematic attributes, and human preferences. Extensive experiments demonstrate that STAGE achieves superior performance in structured narrative control and cross-shot coherence.
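To make the workflow concrete, the storyboard of start-end frame pairs and the cross-shot memory of recurring entities can be sketched as plain data structures. This is a minimal illustrative sketch; all names here (`Shot`, `Storyboard`, `MemoryPack`, the file paths) are assumptions for exposition, not the paper's actual interfaces.

```python
from dataclasses import dataclass, field

@dataclass
class Shot:
    """One shot, anchored by a start-end keyframe pair (hypothetical schema)."""
    start_frame: str          # reference to the shot's first keyframe
    end_frame: str            # reference to the shot's last keyframe
    transition: str = "cut"   # cinematic transition into the next shot

@dataclass
class Storyboard:
    """A structural storyboard: an ordered list of start-end anchored shots."""
    shots: list = field(default_factory=list)

    def frame_pairs(self):
        """Yield the (start, end) anchor pair for each shot in order."""
        for shot in self.shots:
            yield (shot.start_frame, shot.end_frame)

class MemoryPack:
    """Running store of entity references, reused across shots so that the
    same character keeps the same appearance (illustrative, not the paper's API)."""
    def __init__(self):
        self.entities = {}  # entity name -> reference appearance

    def register(self, name, appearance):
        # keep the first-seen reference so later shots stay consistent
        self.entities.setdefault(name, appearance)

# Example: a two-shot storyboard with one recurring character
board = Storyboard([
    Shot("s1_start.png", "s1_end.png", transition="cut"),
    Shot("s2_start.png", "s2_end.png", transition="dissolve"),
])
memory = MemoryPack()
memory.register("hero", "ref_hero.png")
pairs = list(board.frame_pairs())
```

Each shot's generation would then be conditioned on its own (start, end) pair for intra-shot coherence, while consulting the shared memory pack for long-range entity consistency.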
Similar Papers
OneStory: Coherent Multi-Shot Video Generation with Adaptive Memory
CV and Pattern Recognition
Creates longer, connected stories in videos.
STAGE: A Stream-Centric Generative World Model for Long-Horizon Driving-Scene Simulation
CV and Pattern Recognition
Makes self-driving cars create long, clear videos.
ShotDirector: Directorially Controllable Multi-Shot Video Generation with Cinematographic Transitions
CV and Pattern Recognition
Makes videos look like movies with better scene changes.