CineTrans: Learning to Generate Videos with Cinematic Transitions via Masked Diffusion Models

Published: August 15, 2025 | arXiv ID: 2508.11484v1

By: Xiaoxue Wu, Bingjie Gao, Yu Qiao, and more

Potential Business Impact:

Enables generated videos to change scenes smoothly, with movie-style shot transitions.

Despite significant advances in video synthesis, research into multi-shot video generation remains in its infancy. Even with scaled-up models and massive datasets, shot transition capabilities remain rudimentary and unstable, largely confining generated videos to single-shot sequences. In this work, we introduce CineTrans, a novel framework for generating coherent multi-shot videos with cinematic, film-style transitions. To facilitate insight into film editing style, we construct Cine250K, a multi-shot video-text dataset with detailed shot annotations. Furthermore, our analysis of existing video diffusion models uncovers a correspondence between attention maps in the diffusion model and shot boundaries, which we leverage to design a mask-based control mechanism that enables transitions at arbitrary positions and transfers effectively in a training-free setting. After fine-tuning on our dataset with the mask mechanism, CineTrans produces cinematic multi-shot sequences that adhere to film editing style, avoiding unstable transitions or naive concatenations. Finally, we propose specialized evaluation metrics for transition control, temporal consistency, and overall quality, and demonstrate through extensive experiments that CineTrans significantly outperforms existing baselines across all criteria.
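The abstract's mask-based control mechanism can be illustrated with a small sketch. The function below is hypothetical (the paper's actual mask construction is not given here): it builds a block-diagonal temporal attention mask in which frames belonging to the same shot may attend to each other, while attention across a requested shot boundary is zeroed out, the general idea behind steering the diffusion model to cut at chosen positions.

```python
import numpy as np

def shot_transition_mask(num_frames: int, boundaries: list[int]) -> np.ndarray:
    """Build a block-diagonal temporal attention mask (illustrative sketch).

    Frames within the same shot can attend to one another; attention
    across a shot boundary is masked out, encouraging the model to
    produce a cut at each requested boundary index.
    """
    # Assign each frame a shot index: every boundary starts a new shot.
    shot_ids = np.zeros(num_frames, dtype=int)
    for b in boundaries:
        shot_ids[b:] += 1
    # mask[i, j] == 1.0 iff frames i and j belong to the same shot.
    return (shot_ids[:, None] == shot_ids[None, :]).astype(np.float32)

# Example: 8 frames with a single cut between frame 3 and frame 4.
mask = shot_transition_mask(8, boundaries=[4])
```

In a real pipeline this mask would be applied (e.g., as an additive log-mask) inside the temporal attention layers of the video diffusion model; the helper name and API here are assumptions for illustration only.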

Country of Origin
🇨🇳 China

Page Count
27 pages

Category
Computer Science:
CV and Pattern Recognition