CineTrans: Learning to Generate Videos with Cinematic Transitions via Masked Diffusion Models
By: Xiaoxue Wu, Bingjie Gao, Yu Qiao, and more
Potential Business Impact:
Makes videos smoothly change scenes like movies.
Despite significant advances in video synthesis, research into multi-shot video generation remains in its infancy. Even with scaled-up models and massive datasets, shot transition capabilities remain rudimentary and unstable, largely confining generated videos to single-shot sequences. In this work, we introduce CineTrans, a novel framework for generating coherent multi-shot videos with cinematic, film-style transitions. To facilitate insights into film editing style, we construct a multi-shot video-text dataset, Cine250K, with detailed shot annotations. Furthermore, our analysis of existing video diffusion models uncovers a correspondence between attention maps in the diffusion model and shot boundaries, which we leverage to design a mask-based control mechanism that enables transitions at arbitrary positions and transfers effectively in a training-free setting. After fine-tuning on our dataset with the mask mechanism, CineTrans produces cinematic multi-shot sequences that adhere to film editing style, avoiding unstable transitions and naive concatenations. Finally, we propose specialized evaluation metrics for transition control, temporal consistency, and overall quality, and demonstrate through extensive experiments that CineTrans significantly outperforms existing baselines across all criteria.
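To make the mask-based control idea concrete, here is a minimal sketch (not the authors' code) of how a temporal attention mask could force a cut at an arbitrary frame index: frames attend only to other frames within the same shot, giving a block-diagonal mask. The tensor shapes, the `boundaries` argument, and the attention call are illustrative assumptions, not details from the paper.

```python
# Hypothetical sketch of a shot-aware temporal attention mask.
# Assumes PyTorch; shapes and boundary convention are illustrative.
import torch
import torch.nn.functional as F

def make_shot_mask(num_frames: int, boundaries: list[int]) -> torch.Tensor:
    """Boolean mask [num_frames, num_frames]; True where attention is allowed.

    `boundaries` lists the first frame index of each new shot, e.g. [8]
    splits a 16-frame clip into two 8-frame shots.
    """
    # Assign each frame a shot id by counting the boundaries passed so far.
    shot_id = torch.zeros(num_frames, dtype=torch.long)
    for b in boundaries:
        shot_id[b:] += 1
    # Frames may attend only to frames sharing their shot id (block-diagonal).
    return shot_id[:, None] == shot_id[None, :]

def masked_temporal_attention(q, k, v, boundaries):
    """q, k, v: [batch, heads, frames, dim]; applies the shot mask."""
    mask = make_shot_mask(q.shape[-2], boundaries).to(q.device)
    return F.scaled_dot_product_attention(q, k, v, attn_mask=mask)

if __name__ == "__main__":
    b, h, f, d = 1, 4, 16, 32
    q, k, v = (torch.randn(b, h, f, d) for _ in range(3))
    out = masked_temporal_attention(q, k, v, boundaries=[8])  # cut at frame 8
    print(out.shape)  # torch.Size([1, 4, 16, 32])
```

Because the mask is built from arbitrary boundary indices, the same mechanism can place a transition anywhere in the clip, which matches the paper's claim that the mask transfers even in a training-free setting.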
Similar Papers
ShotDirector: Directorially Controllable Multi-Shot Video Generation with Cinematographic Transitions
CV and Pattern Recognition
Makes videos look like movies with better scene changes.
Versatile Transition Generation with Image-to-Video Diffusion
CV and Pattern Recognition
Creates smooth video transitions between scenes.
MultiCOIN: Multi-Modal COntrollable Video INbetweening
CV and Pattern Recognition
Makes videos move exactly how you want.