GeoPQA: Bridging the Visual Perception Gap in MLLMs for Geometric Reasoning

Published: September 22, 2025 | arXiv ID: 2509.17437v1

By: Guizhen Chen, Weiwen Xu, Hao Zhang, and more

Potential Business Impact:

Improves how AI models perceive diagrams, enabling more accurate visual problem-solving.

Business Areas:
Geospatial Data and Analytics, Navigation and Mapping

Recent advancements in reinforcement learning (RL) have enhanced the reasoning abilities of large language models (LLMs), yet the impact on multimodal LLMs (MLLMs) is limited. Particularly in vision-intensive tasks like geometric reasoning, MLLMs hallucinate frequently, leading to inaccurate reasoning. We attribute this to the perceptual bottleneck in MLLMs, which caps the benefits of reasoning training. To quantify this, we design a Geo-Perception Question-Answering (GeoPQA) benchmark, targeting basic geometric concepts and spatial relationships. Experiments on GeoPQA reveal significant shortcomings of MLLMs in visual perception, which constrain RL reward signals for effective training. To address this bottleneck, we propose a two-stage RL training framework by first enhancing the visual perception of geometric structures, then fostering reasoning capabilities. Applied to Qwen2.5-VL-3B-Instruct, our two-stage training improves geometric reasoning by 9.7% and geometric problem solving by 9.1%, compared to the direct reasoning training approach. Our method also generalizes to other vision-intensive domains like figure understanding, highlighting the importance of perceptual grounding in effective MLLM reasoning.
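The two-stage idea in the abstract — reward perceptual grounding first, then reward reasoning — can be illustrated with a minimal reward-schedule sketch. This is not the paper's implementation; all function names, data fields, and the exact-match scoring below are hypothetical simplifications of what a staged RL reward might look like.

```python
# Illustrative sketch (not from the paper): a two-stage RL reward schedule.
# Stage 1 rewards only perceptual accuracy (does the model correctly
# describe the geometric structure in the figure?); stage 2 rewards
# final-answer correctness. Exact-match scoring is a stand-in for
# whatever verifier the training pipeline actually uses.

def perception_reward(pred_structure: str, gold_structure: str) -> float:
    """1.0 if the model's description of the figure matches the reference."""
    return 1.0 if pred_structure.strip() == gold_structure.strip() else 0.0

def reasoning_reward(pred_answer: str, gold_answer: str) -> float:
    """1.0 if the final numeric/symbolic answer matches the reference."""
    return 1.0 if pred_answer.strip() == gold_answer.strip() else 0.0

def staged_reward(stage: int, pred: dict, gold: dict) -> float:
    """Select the reward signal by training stage.

    Stage 1 grounds visual perception before any reasoning reward is
    given, so the later reasoning stage is not capped by perception
    errors (the bottleneck the paper identifies).
    """
    if stage == 1:
        return perception_reward(pred["structure"], gold["structure"])
    return reasoning_reward(pred["answer"], gold["answer"])
```

In this sketch, a sample that describes the figure correctly but answers wrong still earns full reward in stage 1, which is exactly the signal a perception-first curriculum needs.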

Page Count
9 pages

Category
Computer Science:
Computation and Language