GeoBench: Rethinking Multimodal Geometric Problem-Solving via Hierarchical Evaluation
By: Yuan Feng, Yue Yang, Xiaohan He, and more
Geometric problem solving constitutes a critical branch of mathematical reasoning, requiring precise analysis of shapes and spatial relationships. Current evaluations of geometric reasoning in vision-language models (VLMs) face limitations, including the risk of test data contamination from textbook-based benchmarks, overemphasis on final answers over reasoning processes, and insufficient diagnostic granularity. To address these issues, we present GeoBench, a hierarchical benchmark featuring four reasoning levels in geometric problem-solving: Visual Perception, Goal-Oriented Planning, Rigorous Theorem Application, and Self-Reflective Backtracking. Through six formally verified tasks generated via TrustGeoGen, we systematically assess capabilities ranging from attribute extraction to logical error correction. Experiments reveal that while reasoning models like OpenAI-o3 outperform general MLLMs, performance declines significantly as task complexity increases. Further analysis shows that sub-goal decomposition and irrelevant-premise filtering critically influence final problem-solving accuracy, whereas Chain-of-Thought prompting unexpectedly degrades performance on some tasks. These findings establish GeoBench as a comprehensive benchmark and offer actionable guidelines for developing geometric problem-solving systems.
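To make the hierarchical-evaluation idea concrete, the sketch below shows one way such a benchmark harness could be structured: items are tagged with a reasoning level, and accuracy is reported per level rather than as a single final-answer score, so failures can be localized to a reasoning stage. This is a minimal, hypothetical illustration; the names (`Item`, `evaluate`, `LEVELS`, the toy model and data) are assumptions for exposition and are not part of GeoBench's actual release.

```python
# Hypothetical sketch of per-level evaluation in the spirit of GeoBench.
# All identifiers here are illustrative; they do not reflect GeoBench's real API.

from dataclasses import dataclass
from typing import Callable

# The four reasoning levels named in the abstract.
LEVELS = [
    "visual_perception",
    "goal_oriented_planning",
    "rigorous_theorem_application",
    "self_reflective_backtracking",
]

@dataclass
class Item:
    level: str   # one of LEVELS
    prompt: str  # task text (in practice, paired with a geometric diagram)
    answer: str  # formally verified gold answer

def evaluate(model: Callable[[str], str], items: list[Item]) -> dict[str, float]:
    """Score a model per reasoning level instead of with one aggregate number."""
    correct = {lvl: 0 for lvl in LEVELS}
    total = {lvl: 0 for lvl in LEVELS}
    for item in items:
        total[item.level] += 1
        if model(item.prompt).strip() == item.answer.strip():
            correct[item.level] += 1
    # Report accuracy only for levels that actually appear in the item set.
    return {lvl: correct[lvl] / total[lvl] for lvl in LEVELS if total[lvl]}

if __name__ == "__main__":
    # Toy stand-in model and data, purely to make the sketch runnable.
    toy_items = [
        Item("visual_perception", "How many sides does the figure have?", "3"),
        Item("goal_oriented_planning", "Which sub-goal should be proved first?", "AB = AC"),
    ]
    toy_model = lambda prompt: "3"  # always answers "3"
    print(evaluate(toy_model, toy_items))
```

Reporting a per-level breakdown like this is what lets a benchmark distinguish, say, a model that fails at attribute extraction from one that perceives correctly but cannot backtrack from a logical error.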
Similar Papers
GeoGramBench: Benchmarking the Geometric Program Reasoning in Modern LLMs
Artificial Intelligence
Teaches computers to understand drawings from code.
GTR-Bench: Evaluating Geo-Temporal Reasoning in Vision-Language Models
CV and Pattern Recognition
Helps AI understand where things are moving.
GeoSense: Evaluating Identification and Application of Geometric Principles in Multimodal Reasoning
Computation and Language
Helps computers understand and solve geometry problems.