Interpretable Physics Reasoning and Performance Taxonomy in Vision-Language Models
By: Pranav Pawar, Kavish Shah, Akshat Bhalani, et al.
Potential Business Impact:
Tests if computers understand how things move.
As Vision-Language Models (VLMs) grow in sophistication, their ability to perform reasoning is coming under increasing scrutiny. While they excel at many tasks, their grasp of fundamental scientific principles, such as physics, remains an underexplored frontier. To probe these capabilities, we introduce a novel and accessible framework designed to rigorously evaluate VLMs on their understanding of 2D physics. Our framework features a programmatic scenario generator that creates a diverse testbed of over 400 problems across four core domains: Projectile Motion, Collision Dynamics, Mechanics, and Fluid Dynamics. Through a comprehensive evaluation of four state-of-the-art VLMs, we demonstrate a strong correlation between model scale and reasoning ability, with our top-performing model, Qwen2.5-VL-7B, achieving an overall score of 0.815. We find that while models excel at formulaic problems, they struggle significantly with domains requiring abstract spatial reasoning. By designing this framework, we aim to democratize the study of scientific reasoning in VLMs and foster deeper insights into their capabilities and limitations.
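To make the generator idea concrete, here is a minimal sketch of how a programmatic scenario generator might produce projectile-motion problems with ground-truth answers. The function name, parameter ranges, and output schema are illustrative assumptions; the abstract does not specify the paper's actual implementation.

```python
import math
import random

def generate_projectile_problem(seed: int) -> dict:
    """Generate one projectile-motion problem with a ground-truth answer.

    A hypothetical sketch: the paper's real generator, parameter ranges,
    and answer format are not given in the abstract.
    """
    rng = random.Random(seed)
    v0 = rng.uniform(5.0, 50.0)      # initial speed, m/s
    angle = rng.uniform(15.0, 75.0)  # launch angle, degrees
    g = 9.81                         # gravitational acceleration, m/s^2

    # Level-ground range: R = v0^2 * sin(2*theta) / g
    theta = math.radians(angle)
    range_m = v0 ** 2 * math.sin(2 * theta) / g

    return {
        "domain": "projectile_motion",
        "question": (
            f"A ball is launched at {v0:.1f} m/s at {angle:.1f} degrees "
            "above the horizontal on level ground. How far does it travel "
            "before landing, in meters?"
        ),
        "ground_truth": round(range_m, 2),
    }

if __name__ == "__main__":
    # Build a small testbed; the paper reports over 400 problems
    # spanning four domains.
    testbed = [generate_projectile_problem(seed=i) for i in range(100)]
    print(testbed[0]["question"])
    print("answer:", testbed[0]["ground_truth"])
```

Because each problem carries a machine-computed ground truth, a VLM's free-form answer can be scored automatically, which is what makes this style of benchmark cheap to scale and reproduce.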
Similar Papers
DeepPHY: Benchmarking Agentic VLMs on Physical Reasoning
Artificial Intelligence
Teaches computers to understand how things move.
From Diagnosis to Improvement: Probing Spatio-Physical Reasoning in Vision Language Models
CV and Pattern Recognition
Teaches computers to understand how things move.
Unfettered Forceful Skill Acquisition with Physical Reasoning and Coordinate Frame Labeling
Robotics
Robots learn to move objects by seeing forces.