Score: 2

GeoChain: Multimodal Chain-of-Thought for Geographic Reasoning

Published: June 1, 2025 | arXiv ID: 2506.00785v3

By: Sahiti Yerramilli, Nilay Pande, Rynaa Grover, and more

BigTech Affiliations: Waymo, Google

Potential Business Impact:

Teaches AI models to identify and reason about geographic locations from street-level imagery.

Business Areas:
Geospatial Data and Analytics, Navigation and Mapping

This paper introduces GeoChain, a large-scale benchmark for evaluating step-by-step geographic reasoning in multimodal large language models (MLLMs). Leveraging 1.46 million Mapillary street-level images, GeoChain pairs each image with a 21-step chain-of-thought (CoT) question sequence (over 30 million Q&A pairs). These sequences guide models from coarse attributes to fine-grained localization across four reasoning categories (visual, spatial, cultural, and precise geolocation), each annotated by difficulty. Images are also enriched with semantic segmentation (150 classes) and a visual locatability score. Benchmarking of contemporary MLLMs (GPT-4.1 variants, Claude 3.7, Gemini 2.5 variants) on a diverse 2,088-image subset reveals consistent challenges: models frequently exhibit weaknesses in visual grounding, display erratic reasoning, and struggle to achieve accurate localization, especially as reasoning complexity escalates. GeoChain offers a robust diagnostic methodology, critical for fostering significant advancements in complex geographic reasoning within MLLMs.
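
To make the benchmark structure concrete, below is a minimal Python sketch of how a single GeoChain-style item (one street-level image plus its 21-step coarse-to-fine question sequence) might be represented and scored. All field and function names (CoTQuestion, GeoChainItem, evaluate_item, locatability_score, model_answer_fn) are illustrative assumptions, not the released dataset schema or evaluation code.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class CoTQuestion:
    step: int                 # 1..21, ordered from coarse attributes to fine localization
    category: str             # "visual" | "spatial" | "cultural" | "geolocation" (assumed labels)
    difficulty: str           # difficulty annotation, e.g. "easy" | "medium" | "hard"
    question: str
    reference_answer: str

@dataclass
class GeoChainItem:
    image_id: str                       # Mapillary street-level image identifier
    locatability_score: float           # visual locatability; higher = easier to localize
    segmentation_classes: List[str] = field(default_factory=list)  # subset of the 150 classes
    questions: List[CoTQuestion] = field(default_factory=list)

def evaluate_item(item: GeoChainItem,
                  model_answer_fn: Callable[[str, str], str]) -> float:
    """Ask the model each CoT question in step order and return per-item accuracy."""
    if not item.questions:
        return 0.0
    correct = 0
    for q in sorted(item.questions, key=lambda q: q.step):
        predicted = model_answer_fn(item.image_id, q.question)
        correct += int(predicted.strip().lower() == q.reference_answer.strip().lower())
    return correct / len(item.questions)
```

In practice, per-step accuracies would be aggregated by category and difficulty across the evaluation subset (2,088 images in the paper) to diagnose where reasoning degrades as the chain moves toward precise geolocation.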

Country of Origin
πŸ‡ΊπŸ‡Έ United States

Repos / Data Links

Page Count
15 pages

Category
Computer Science:
Artificial Intelligence