VIPER: Process-aware Evaluation for Generative Video Reasoning
By: Yifan Li, Yukai Gu, Yingqian Min, and more
Potential Business Impact:
Tests if AI videos show real thinking, not tricks.
Recent breakthroughs in video generation have demonstrated an emerging capability termed Chain-of-Frames (CoF) reasoning, where models resolve complex tasks by generating a continuous sequence of frames. While these models show promise for Generative Video Reasoning (GVR), existing evaluation frameworks often rely on single-frame assessments, which can lead to outcome-hacking, where a model reaches a correct conclusion through an erroneous process. To address this, we propose a process-aware evaluation paradigm. We introduce VIPER, a comprehensive benchmark spanning 16 tasks across temporal, structural, symbolic, spatial, physics, and planning reasoning. Furthermore, we propose Process-outcome Consistency (POC@r), a new metric that uses a VLM-as-Judge with a hierarchical rubric to evaluate both the validity of the intermediate steps and the final result. Our experiments reveal that state-of-the-art video models achieve only about 20% POC@1.0 and exhibit significant outcome-hacking. We further explore the impact of test-time scaling and sampling robustness, highlighting a substantial gap between current video generation and true generalized visual reasoning. Our benchmark will be publicly released.
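The abstract does not spell out how POC@r is computed. The minimal sketch below shows one plausible reading, assuming a sample passes only if the VLM judge marks the final outcome correct and at least a fraction r of the rubric's intermediate steps valid; the `JudgedSample` fields and `poc_at_r` helper are illustrative names, not the benchmark's actual definition or API.

```python
# Hedged sketch of a process-outcome consistency score in the spirit of POC@r.
# Assumption: a sample counts only if the outcome is correct AND at least a
# fraction r of its intermediate rubric steps are judged valid.
from dataclasses import dataclass
from typing import List


@dataclass
class JudgedSample:
    outcome_correct: bool      # VLM judge's verdict on the final result
    step_valid: List[bool]     # VLM judge's verdicts on intermediate steps


def poc_at_r(samples: List[JudgedSample], r: float) -> float:
    """Fraction of samples with a correct outcome and a process score >= r."""
    if not samples:
        return 0.0
    passed = 0
    for s in samples:
        process_score = sum(s.step_valid) / len(s.step_valid) if s.step_valid else 0.0
        if s.outcome_correct and process_score >= r:
            passed += 1
    return passed / len(samples)


# Example: an "outcome-hacked" sample (right answer, flawed process) fails POC@1.0.
samples = [
    JudgedSample(outcome_correct=True,  step_valid=[True, True, True]),   # consistent
    JudgedSample(outcome_correct=True,  step_valid=[True, False, True]),  # outcome-hacked
    JudgedSample(outcome_correct=False, step_valid=[True, True, True]),   # wrong outcome
]
print(poc_at_r(samples, r=1.0))  # 0.333...
```

Under this reading, raising r tightens the requirement on the intermediate process, so POC@1.0 rewards only generations whose every judged step is valid.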
Similar Papers
V-ReasonBench: Toward Unified Reasoning Benchmark Suite for Video Generation Models
CV and Pattern Recognition
Tests how well AI understands videos.
Can World Simulators Reason? Gen-ViRe: A Generative Visual Reasoning Benchmark
CV and Pattern Recognition
Tests how well videos can think and plan.
VIPER: Visual Perception and Explainable Reasoning for Sequential Decision-Making
Machine Learning (CS)
Lets robots follow spoken instructions to do tasks.