What Happens Next? Next Scene Prediction with a Unified Video Model
By: Xinjie Li, Zhimin Chen, Rui Zhao, and more
Potential Business Impact:
Helps computers guess what happens next in videos.
Recent unified models for joint understanding and generation have significantly advanced visual generation capabilities. However, their focus on conventional tasks like text-to-video generation has left their temporal reasoning potential largely underexplored. To address this gap, we introduce Next Scene Prediction (NSP), a new task that pushes unified video models toward temporal and causal reasoning. Unlike text-to-video generation, NSP requires predicting plausible futures from preceding context, demanding deeper understanding and reasoning. To tackle this task, we propose a unified framework combining Qwen-VL for comprehension and LTX for synthesis, bridged by a latent query embedding and a connector module. This model is trained in three stages on our newly curated, large-scale NSP dataset: text-to-video pre-training, supervised fine-tuning, and reinforcement learning (via GRPO) with our proposed causal consistency reward. Experiments demonstrate that our model achieves state-of-the-art performance on our benchmark, advancing the capability of generalist multimodal systems to anticipate what happens next.
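To make the comprehension-to-synthesis bridge more concrete, below is a minimal PyTorch sketch of how a latent-query connector of this kind might look: learnable query embeddings attend over the vision-language model's hidden states and are projected into the conditioning space of the video generator. All names, dimensions, and the cross-attention design here are illustrative assumptions, not the authors' actual implementation.

```python
import torch
import torch.nn as nn

class QueryConnector(nn.Module):
    """Hypothetical latent-query bridge: learnable queries summarize the
    Qwen-VL context features, and an MLP maps the result into the
    conditioning space assumed for the LTX video generator."""

    def __init__(self, num_queries=64, vlm_dim=3584, ltx_cond_dim=2048, num_heads=8):
        super().__init__()
        # Learnable latent query embeddings that pool the preceding-scene context.
        self.queries = nn.Parameter(torch.randn(num_queries, vlm_dim) * 0.02)
        # Cross-attention: queries attend to the VLM's token-level hidden states.
        self.cross_attn = nn.MultiheadAttention(vlm_dim, num_heads, batch_first=True)
        # Connector MLP projecting from the VLM space to the generator's conditioning space.
        self.proj = nn.Sequential(
            nn.Linear(vlm_dim, ltx_cond_dim),
            nn.GELU(),
            nn.Linear(ltx_cond_dim, ltx_cond_dim),
        )

    def forward(self, vlm_hidden_states):
        # vlm_hidden_states: (batch, seq_len, vlm_dim) features over the context video and prompt.
        batch = vlm_hidden_states.size(0)
        q = self.queries.unsqueeze(0).expand(batch, -1, -1)
        attended, _ = self.cross_attn(q, vlm_hidden_states, vlm_hidden_states)
        # Returns (batch, num_queries, ltx_cond_dim) conditioning tokens for the video denoiser.
        return self.proj(attended)

if __name__ == "__main__":
    connector = QueryConnector()
    fake_vlm_features = torch.randn(2, 300, 3584)  # stand-in for Qwen-VL outputs
    cond = connector(fake_vlm_features)
    print(cond.shape)  # torch.Size([2, 64, 2048])
```

In such a design, only the query embeddings and connector would need to learn the mapping between the two pretrained models, which is one plausible reason a staged pipeline (text-to-video pre-training, then supervised fine-tuning, then GRPO with a causal consistency reward) could be trained efficiently.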
Similar Papers
Video-as-Answer: Predict and Generate Next Video Event with Joint-GRPO
CV and Pattern Recognition
Answers questions by predicting and generating the next video event.
InternVideo-Next: Towards General Video Foundation Models without Video-Text Supervision
CV and Pattern Recognition
Teaches computers to understand videos like humans.
Next-Frame Feature Prediction for Multimodal Deepfake Detection and Temporal Localization
CV and Pattern Recognition
Finds fake videos by predicting what happens next.