CineLOG: A Training Free Approach for Cinematic Long Video Generation
By: Zahra Dehghanian, Morteza Abolghasemi, Hamid Beigy, and more
Potential Business Impact:
Makes videos follow camera directions and styles.
Controllable video synthesis is a central challenge in computer vision, yet current models struggle with fine-grained control beyond textual prompts, particularly for cinematic attributes like camera trajectory and genre. Existing datasets often suffer from severe data imbalance, noisy labels, or a significant simulation-to-real gap. To address this, we introduce CineLOG, a new dataset of 5,000 high-quality, balanced, and uncut video clips. Each entry is annotated with a detailed scene description, explicit camera instructions based on a standard cinematic taxonomy, and a genre label, ensuring balanced coverage across 17 diverse camera movements and 15 film genres. We also present the novel pipeline designed to create this dataset, which decouples the complex text-to-video (T2V) generation task into four simpler stages, each handled by more mature technology. To enable coherent, multi-shot sequences, we introduce a novel Trajectory-Guided Transition Module that generates smooth spatio-temporal interpolations between shots. Extensive human evaluations show that our pipeline significantly outperforms SOTA end-to-end T2V models in adhering to specific camera and screenplay instructions, while maintaining professional visual quality. All code and data are available at https://cine-log.pages.dev.
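The abstract only says the transition module produces smooth spatio-temporal interpolation between shots; as a minimal sketch, one could blend two camera trajectories with an ease-in/ease-out weight. The function name, the use of 3-D camera positions, and the smoothstep weighting below are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

def interpolate_camera_positions(start_pose, end_pose, num_frames):
    """Blend two 3-D camera positions over num_frames.

    Hypothetical sketch of a trajectory-guided transition: a
    smoothstep weight (3t^2 - 2t^3) replaces a raw linear blend so
    the virtual camera eases out of one shot and into the next.
    """
    start = np.asarray(start_pose, dtype=float)
    end = np.asarray(end_pose, dtype=float)
    t = np.linspace(0.0, 1.0, num_frames)
    w = 3 * t**2 - 2 * t**3  # smoothstep: zero velocity at both ends
    return start[None, :] + w[:, None] * (end - start)[None, :]

# One interpolated position per transition frame.
path = interpolate_camera_positions([0, 0, 0], [1, 2, 0], num_frames=5)
```

The endpoints of `path` match the two input poses exactly, so the interpolated segment can be spliced between shots without a visible jump.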
Similar Papers
Generative Photographic Control for Scene-Consistent Video Cinematic Editing
CV and Pattern Recognition
Lets you change a movie's look like a pro.
LongVie 2: Multimodal Controllable Ultra-Long Video World Model
CV and Pattern Recognition
Makes videos that stay real and make sense.
HoloCine: Holistic Generation of Cinematic Multi-Shot Long Video Narratives
CV and Pattern Recognition
Makes computers create whole movies, not just short clips.