MVP: Enhancing Video Large Language Models via Self-supervised Masked Video Prediction
By: Xiaokun Sun, Zezhong Wu, Zewen Ding, and more
Potential Business Impact:
Teaches computers to understand how events unfold over time in videos.
Reinforcement-learning-based post-training paradigms for Video Large Language Models (VideoLLMs) have achieved significant success by optimizing for visual-semantic tasks such as captioning and VideoQA. However, while these approaches effectively enhance perception, they primarily target holistic content understanding and often lack explicit supervision for intrinsic temporal coherence and inter-frame correlations. This limits a model's ability to capture intricate dynamics and fine-grained visual causality. To bridge this gap explicitly, we propose a novel post-training objective: Masked Video Prediction (MVP). By requiring the model to reconstruct a masked continuous segment from among a set of challenging distractors, MVP forces the model to attend to the sequential logic and temporal context of events. To support training at scale, we introduce a data synthesis pipeline capable of transforming arbitrary video corpora into MVP training samples, and we further employ Group Relative Policy Optimization (GRPO) with a fine-grained reward function to strengthen the model's understanding of video context and temporal properties. Comprehensive evaluations demonstrate that MVP enhances video reasoning capabilities by directly reinforcing temporal reasoning and causal understanding.
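The abstract does not spell out how MVP samples are built or how the reward is computed, so the following is only a minimal Python sketch under stated assumptions: `MVPSample`, `make_mvp_sample`, and `group_relative_advantages` are hypothetical names, clip-level masking with random distractor sampling stands in for the paper's (presumably more careful) synthesis pipeline, and the advantage computation shows only the standard group-relative normalization GRPO is known for, not the paper's fine-grained reward.

```python
# Hypothetical sketch of MVP sample construction and a GRPO-style
# group-relative advantage, based only on the abstract's description.
import random
from dataclasses import dataclass
from typing import List

@dataclass
class MVPSample:
    context_clips: List[str]   # surrounding (unmasked) clip identifiers
    candidates: List[str]      # masked ground-truth clip mixed with distractors
    answer_index: int          # position of the true clip among the candidates

def make_mvp_sample(clips: List[str], distractor_pool: List[str],
                    num_distractors: int = 3) -> MVPSample:
    """Mask one contiguous clip and mix it with distractor clips.

    `clips` is an ordered list of clip ids from one video; masking a
    single clip here approximates the paper's "masked continuous
    segment", and random sampling stands in for its harder distractors.
    """
    masked_idx = random.randrange(len(clips))
    truth = clips[masked_idx]
    context = clips[:masked_idx] + ["<MASK>"] + clips[masked_idx + 1:]
    candidates = random.sample(distractor_pool, num_distractors) + [truth]
    random.shuffle(candidates)
    return MVPSample(context, candidates, candidates.index(truth))

def group_relative_advantages(rewards: List[float]) -> List[float]:
    """GRPO-style advantage: normalize each rollout's reward against the
    mean/std of its own sampling group (no learned value function)."""
    mean = sum(rewards) / len(rewards)
    var = sum((r - mean) ** 2 for r in rewards) / len(rewards)
    std = var ** 0.5
    std = std if std > 0 else 1.0  # guard against identical rewards in a group
    return [(r - mean) / std for r in rewards]
```

In this framing, each rollout would be scored by how well the model's chosen candidate matches `answer_index` (possibly with partial credit under the paper's fine-grained reward), and `group_relative_advantages` would normalize those scores across the rollouts sampled for one MVP sample.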
Similar Papers
InternVideo-Next: Towards General Video Foundation Models without Video-Text Supervision
CV and Pattern Recognition
Teaches computers to understand videos like humans.
MVP: Winning Solution to SMP Challenge 2025 Video Track
CV and Pattern Recognition
Predicts which videos will be popular online.
ViSS-R1: Self-Supervised Reinforcement Video Reasoning
CV and Pattern Recognition
Makes computers understand videos by watching them closely.