V-ReasonBench: Toward Unified Reasoning Benchmark Suite for Video Generation Models
By: Yang Luo, Xuanlei Zhao, Baijiong Lin, and more
Potential Business Impact:
Tests how well AI video generators can reason.
Recent progress in generative video models, such as Veo-3, has shown surprising zero-shot reasoning abilities, creating a growing need for systematic and reliable evaluation. We introduce V-ReasonBench, a benchmark designed to assess video reasoning across four key dimensions: structured problem-solving, spatial cognition, pattern-based inference, and physical dynamics. The benchmark is built from both synthetic and real-world image sequences and provides a diverse set of answer-verifiable tasks that are reproducible, scalable, and unambiguous. Evaluations of six state-of-the-art video models reveal clear dimension-wise differences, with strong variation in structured, spatial, pattern-based, and physical reasoning. We further compare video models with strong image models, analyze common hallucination behaviors, and study how video duration affects Chain-of-Frames reasoning. Overall, V-ReasonBench offers a unified and reproducible framework for measuring video reasoning and aims to support the development of models with more reliable, human-aligned reasoning skills.
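The abstract describes "answer-verifiable" tasks scored along four reasoning dimensions. A minimal sketch of what such dimension-wise scoring could look like is below; the record format, the `exact_match` checker, and the dimension keys are assumptions for illustration, not the benchmark's actual implementation.

```python
# Hypothetical sketch of dimension-wise, answer-verifiable scoring.
# Dimension names follow the abstract; everything else is assumed.
from collections import defaultdict

DIMENSIONS = ("structured", "spatial", "pattern", "physical")

def exact_match(predicted: str, reference: str) -> bool:
    """Unambiguous check: normalize whitespace and case, then compare."""
    return predicted.strip().lower() == reference.strip().lower()

def score(records):
    """records: iterable of (dimension, predicted_answer, reference_answer).
    Returns per-dimension accuracy for dimensions that have any items."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for dim, pred, ref in records:
        total[dim] += 1
        correct[dim] += exact_match(pred, ref)
    return {d: correct[d] / total[d] for d in DIMENSIONS if total[d]}

if __name__ == "__main__":
    demo = [
        ("structured", "7", "7"),
        ("structured", "5", "4"),
        ("spatial", "left", "Left"),
    ]
    print(score(demo))  # {'structured': 0.5, 'spatial': 1.0}
```

Because each task has a single verifiable answer, this kind of scoring is reproducible and unambiguous, which matches the reliability goals stated in the abstract.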
Similar Papers
TiViBench: Benchmarking Think-in-Video Reasoning for Video Generative Models
CV and Pattern Recognition
Tests if AI can make videos that make sense.
Benchmarking Scientific Understanding and Reasoning for Video Generation using VideoScience-Bench
CV and Pattern Recognition
Teaches computers to make science videos correctly.
Reasoning via Video: The First Evaluation of Video Models' Reasoning Abilities through Maze-Solving Tasks
CV and Pattern Recognition
Helps computers solve puzzles by watching videos.