CausalVQA: A Physically Grounded Causal Reasoning Benchmark for Video Models

Published: June 11, 2025 | arXiv ID: 2506.09943v1

By: Aaron Foss, Chloe Evans, Sasha Mitts, and more

BigTech Affiliations: Meta

Potential Business Impact:

Teaches computers to predict cause and effect in real-world videos.

Business Areas:
Virtual Reality Hardware, Software

We introduce CausalVQA, a benchmark dataset for video question answering (VQA) composed of question-answer pairs that probe models' understanding of causality in the physical world. Existing VQA benchmarks tend to focus either on surface-level perceptual understanding of real-world videos or on narrow physical-reasoning questions created in simulation environments. CausalVQA fills an important gap by presenting challenging questions grounded in real-world scenarios, while focusing on models' ability to predict the likely outcomes of different actions and events through five question types: counterfactual, hypothetical, anticipation, planning, and descriptive. We designed quality-control mechanisms that prevent models from exploiting trivial shortcuts, requiring them to base their answers on deep visual understanding rather than linguistic cues. We find that current frontier multimodal models fall substantially below human performance on the benchmark, especially on anticipation and hypothetical questions. This highlights the difficulty current systems have in leveraging spatial-temporal reasoning, understanding of physical principles, and comprehension of possible alternatives to make accurate predictions in real-world settings.

Country of Origin
🇺🇸 United States


Page Count
35 pages

Category
Computer Science:
Computer Vision and Pattern Recognition