Can Vision-Language Models Solve Visual Math Equations?
By: Monjoy Narayan Choudhury, Junling Wang, Yifan Hou, and more
Potential Business Impact:
Teaches computers to solve math problems in pictures.
Despite strong performance in visual understanding and language-based reasoning, Vision-Language Models (VLMs) struggle with tasks requiring integrated perception and symbolic computation. We study this limitation through visual equation solving, where mathematical equations are embedded in images, variables are represented by object icons, and coefficients must be inferred by counting. While VLMs perform well on textual equations, they fail on visually grounded counterparts. To understand this gap, we decompose the task into coefficient counting and variable recognition, and find that counting is the primary bottleneck, even when recognition is accurate. We also observe that composing recognition and reasoning introduces additional errors, highlighting challenges in multi-step visual reasoning. Finally, as equation complexity increases, symbolic reasoning itself becomes a limiting factor. These findings reveal key weaknesses in current VLMs and point toward future improvements in visually grounded mathematical reasoning.
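To make the task concrete, the textual analogue of such a visually grounded equation can be sketched in a few lines: variables appear as repeated icon tokens (here letters stand in for object icons), coefficients are recovered by counting occurrences, and the resulting linear system is solved symbolically. This is a minimal illustrative sketch, not the paper's code; the parsing format and helper names are assumptions.

```python
from collections import Counter
from fractions import Fraction

def parse(eq):
    """Parse an icon equation like 'A A B = 8' into
    (icon -> count, right-hand-side constant)."""
    lhs, rhs = eq.split("=")
    return Counter(lhs.split()), int(rhs)

def solve_2x2(eq1, eq2, x, y):
    """Solve two icon equations in the two icons x and y
    via Cramer's rule, using exact rational arithmetic."""
    c1, b1 = parse(eq1)
    c2, b2 = parse(eq2)
    a, b = c1[x], c1[y]   # coefficients = icon counts in equation 1
    c, d = c2[x], c2[y]   # coefficients = icon counts in equation 2
    det = a * d - b * c
    if det == 0:
        raise ValueError("system is singular")
    return (Fraction(b1 * d - b * b2, det),
            Fraction(a * b2 - b1 * c, det))

# 'A A B = 8' encodes 2A + B = 8; 'A B B = 7' encodes A + 2B = 7.
print(solve_2x2("A A B = 8", "A B B = 7", "A", "B"))  # → (3, 2)
```

The counting step (`Counter(lhs.split())`) is exactly the sub-skill the paper identifies as the primary bottleneck for VLMs: once coefficients are counted correctly, the remaining symbolic solve is routine.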
Similar Papers
MathSight: A Benchmark Exploring Have Vision-Language Models Really Seen in University-Level Mathematical Reasoning?
CV and Pattern Recognition
Tests if computers *really* see math problems.
Your Vision-Language Model Can't Even Count to 20: Exposing the Failures of VLMs in Compositional Counting
CV and Pattern Recognition
AI struggles to count mixed objects accurately.
Synthesizing Visual Concepts as Vision-Language Programs
Artificial Intelligence
Makes AI understand pictures and think logically.