MathSight: A Benchmark Exploring What Vision-Language Models Have Really Seen in University-Level Mathematical Reasoning
By: Yuandong Wang, Yao Cui, Yuxin Zhao, and more
Potential Business Impact:
Tests if computers *really* see math problems.
Recent advances in Vision-Language Models (VLMs) have achieved impressive progress in multimodal mathematical reasoning. Yet, how much visual information truly contributes to reasoning remains unclear. Existing benchmarks report strong overall performance but seldom isolate the role of the image modality, leaving open whether VLMs genuinely leverage visual understanding or merely depend on linguistic priors. To address this, we present MathSight, a university-level multimodal mathematical reasoning benchmark designed to disentangle and quantify the effect of visual input. Each problem includes multiple visual variants -- original, hand-drawn, photo-captured -- and a text-only condition for controlled comparison. Experiments on state-of-the-art VLMs reveal a consistent trend: the contribution of visual information diminishes with increasing problem difficulty. Remarkably, Qwen3-VL without any image input surpasses both its multimodal variants and GPT-5, underscoring the need for benchmarks like MathSight to advance genuine vision-grounded reasoning in future models.
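The controlled-comparison setup the abstract describes can be sketched as a small evaluation loop: score each problem under every visual variant plus the text-only condition, then take the accuracy gap relative to text-only as an estimate of how much the image actually contributes. The sketch below is a hypothetical illustration, not the authors' released code; the `Problem` class, the `answer_fn` callable, and the exact condition labels are assumptions.

```python
# Hypothetical sketch of MathSight-style controlled comparison (not official code).
from dataclasses import dataclass, field
from typing import Callable, Dict, Optional, List, Tuple

CONDITIONS = ["original", "hand_drawn", "photo_captured", "text_only"]

@dataclass
class Problem:
    question: str                                   # problem statement as text
    answer: str                                     # ground-truth final answer
    images: Dict[str, Optional[str]] = field(default_factory=dict)
    # images maps a visual condition to an image path; "text_only" uses no image.

def evaluate(
    problems: List[Problem],
    answer_fn: Callable[[str, Optional[str]], str],
) -> Tuple[Dict[str, float], Dict[str, float]]:
    """answer_fn(question, image_path_or_None) returns the model's final answer."""
    correct = {c: 0 for c in CONDITIONS}
    for p in problems:
        for cond in CONDITIONS:
            image = None if cond == "text_only" else p.images.get(cond)
            pred = answer_fn(p.question, image)
            if pred.strip() == p.answer.strip():
                correct[cond] += 1
    n = len(problems)
    accuracy = {c: correct[c] / n for c in CONDITIONS}
    # "Visual contribution": accuracy with an image minus text-only accuracy.
    contribution = {
        c: accuracy[c] - accuracy["text_only"] for c in CONDITIONS if c != "text_only"
    }
    return accuracy, contribution
```

Under this framing, a near-zero (or negative) contribution on harder problems would match the trend the paper reports, where text-only performance catches up with or exceeds the multimodal variants.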
Similar Papers
Benchmarking Multimodal Mathematical Reasoning with Explicit Visual Dependency
CV and Pattern Recognition
Tests if computers can do math with pictures.
VisioMath: Benchmarking Figure-based Mathematical Reasoning in LMMs
Artificial Intelligence
Helps computers solve math problems with picture answers.
IQBench: How "Smart" Are Vision-Language Models? A Study with Human IQ Tests
CV and Pattern Recognition
Tests computers' smarts on picture puzzles.