VRBench: A Benchmark for Multi-Step Reasoning in Long Narrative Videos
By: Jiashuo Yu, Yue Wu, Meng Chu, and more
Potential Business Impact:
Tests AI's ability to reason step by step over long video stories.
We present VRBench, the first long narrative video benchmark crafted for evaluating large models' multi-step reasoning capabilities, addressing limitations in existing evaluations that overlook temporal reasoning and procedural validity. It comprises 960 long videos (with an average duration of 1.6 hours), along with 8,243 human-labeled multi-step question-answering pairs and 25,106 reasoning steps with timestamps. These videos are curated via a multi-stage filtering process, including expert inter-rater review, to prioritize plot coherence. We develop a human-AI collaborative framework that generates coherent reasoning chains, each requiring multiple temporally grounded steps and spanning seven types (e.g., event attribution, implicit inference). VRBench includes a multi-phase evaluation pipeline that assesses models at both the outcome and process levels. In addition to multiple-choice questions (MCQs) for final answers, we propose a process-level, LLM-guided scoring metric that comprehensively evaluates the quality of the reasoning chain across multiple dimensions. Through extensive evaluations of 12 LLMs and 19 VLMs on VRBench, we undertake a thorough analysis and provide valuable insights that advance the field of multi-step reasoning.
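The process-level evaluation described in the abstract can be pictured as an LLM judge scoring each predicted reasoning chain along several dimensions and averaging the results. The sketch below is illustrative only: the dimension names, prompt wording, and the `call_llm_judge` helper are assumptions for exposition, not VRBench's actual scoring pipeline.

```python
# Minimal sketch (not the authors' code): score a predicted multi-step reasoning
# chain against a human-labeled reference using an LLM judge, one rating per
# dimension, then average into an overall process-level score.
from statistics import mean

# Hypothetical scoring dimensions; VRBench's actual dimensions may differ.
DIMENSIONS = ["factual_grounding", "temporal_order", "logical_coherence", "completeness"]

def call_llm_judge(prompt: str) -> float:
    """Placeholder for an LLM API call that returns a 1-5 rating."""
    raise NotImplementedError("Wire this to an LLM provider of your choice.")

def score_reasoning_chain(question: str, predicted_steps: list[str],
                          reference_steps: list[str]) -> dict:
    """Rate a predicted reasoning chain on each dimension and aggregate."""
    scores = {}
    for dim in DIMENSIONS:
        prompt = (
            f"Question: {question}\n"
            f"Reference reasoning steps: {reference_steps}\n"
            f"Predicted reasoning steps: {predicted_steps}\n"
            f"Rate the predicted chain's {dim.replace('_', ' ')} from 1 to 5."
        )
        scores[dim] = call_llm_judge(prompt)
    scores["overall"] = mean(scores[d] for d in DIMENSIONS)
    return scores
```

Separating per-dimension ratings from the final average mirrors the outcome-versus-process distinction: MCQ accuracy captures the final answer, while the judge scores capture how the model got there.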
Similar Papers
VCRBench: Exploring Long-form Causal Reasoning Capabilities of Large Video Language Models
CV and Pattern Recognition
Teaches computers to understand cause and effect in videos.
V-ReasonBench: Toward Unified Reasoning Benchmark Suite for Video Generation Models
CV and Pattern Recognition
Tests reasoning in models that generate videos.
RVTBench: A Benchmark for Visual Reasoning Tasks
CV and Pattern Recognition
Teaches computers to understand videos like people.