MultiShotMaster: A Controllable Multi-Shot Video Generation Framework
By: Qinghe Wang, Xiaoyu Shi, Baolu Li, and more
Potential Business Impact:
Creates movie-like videos with many connected scenes.
Current video generation techniques excel at single-shot clips but struggle to produce narrative multi-shot videos, which require flexible shot arrangement, a coherent narrative, and controllability beyond text prompts. To tackle these challenges, we propose MultiShotMaster, a framework for highly controllable multi-shot video generation. We extend a pretrained single-shot model by integrating two novel variants of RoPE. First, we introduce Multi-Shot Narrative RoPE, which applies an explicit phase shift at each shot transition, enabling flexible shot arrangement while preserving the temporal narrative order. Second, we design Spatiotemporal Position-Aware RoPE to incorporate reference tokens and grounding signals, enabling spatiotemporally grounded reference injection. In addition, to overcome data scarcity, we establish an automated annotation pipeline that extracts multi-shot videos, captions, cross-shot grounding signals, and reference images. Our framework leverages intrinsic architectural properties to support multi-shot video generation, featuring text-driven inter-shot consistency, subject customization with motion control, and background-driven scene customization. Both the shot count and the duration of each shot are flexibly configurable. Extensive experiments demonstrate the superior performance and outstanding controllability of our framework.
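The abstract does not spell out how Multi-Shot Narrative RoPE is implemented, but its core idea (an explicit phase shift in the temporal rotary positions at each shot transition) can be illustrated with a short sketch. Everything below is an assumption-based illustration, not the authors' code: the function names, the fixed `phase_shift` offset, and the per-frame position scheme are hypothetical choices made only to show the mechanism.

```python
import torch

def rope_angles(positions: torch.Tensor, dim: int, base: float = 10000.0) -> torch.Tensor:
    """Standard RoPE angles: outer product of positions and inverse frequencies."""
    inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2).float() / dim))
    return torch.outer(positions.float(), inv_freq)  # (num_positions, dim // 2)

def multi_shot_positions(shot_lengths: list[int], phase_shift: float = 32.0) -> torch.Tensor:
    """Temporal positions that grow monotonically across shots, with an extra
    offset (the 'phase shift') inserted at every shot boundary. Within a shot,
    positions advance by 1 per frame; across a boundary they jump by
    `phase_shift`, marking the transition while preserving narrative order.
    The `phase_shift` value here is an arbitrary placeholder."""
    positions, t = [], 0.0
    for shot_idx, length in enumerate(shot_lengths):
        if shot_idx > 0:
            t += phase_shift  # explicit phase shift at the shot transition
        positions.extend(t + i for i in range(length))
        t += length
    return torch.tensor(positions)

def apply_rope(x: torch.Tensor, angles: torch.Tensor) -> torch.Tensor:
    """Rotate channel pairs of x (num_positions, dim) by the given angles."""
    cos, sin = angles.cos(), angles.sin()
    x1, x2 = x[..., 0::2], x[..., 1::2]
    out = torch.empty_like(x)
    out[..., 0::2] = x1 * cos - x2 * sin
    out[..., 1::2] = x1 * sin + x2 * cos
    return out

# Example: a 3-shot video with 16, 24, and 16 frames.
pos = multi_shot_positions([16, 24, 16])
angles = rope_angles(pos, dim=64)
q = torch.randn(pos.numel(), 64)
q_rotated = apply_rope(q, angles)
```

Under these assumptions, the jump at each boundary keeps all shots ordered on one shared temporal axis while making the transition explicit to attention, which is consistent with the paper's claim that shot count and duration can be varied freely without retraining the positional scheme.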
Similar Papers
ShotDirector: Directorially Controllable Multi-Shot Video Generation with Cinematographic Transitions
CV and Pattern Recognition
Makes videos look like movies with better scene changes.
OneStory: Coherent Multi-Shot Video Generation with Adaptive Memory
CV and Pattern Recognition
Creates longer, connected stories in videos.
EchoShot: Multi-Shot Portrait Video Generation
CV and Pattern Recognition
Creates consistent, customizable videos of people.