CountQA: How Well Do MLLMs Count in the Wild?

Published: August 8, 2025 | arXiv ID: 2508.06585v2

By: Jayant Sravan Tamarapalli, Rynaa Grover, Nilay Pande, and more

BigTech Affiliations: Waymo, Google

Potential Business Impact:

Helps computers accurately count objects in pictures.

Multimodal Large Language Models (MLLMs) demonstrate remarkable fluency in describing visual scenes, yet they exhibit a critical deficit in a fundamental cognitive skill: object counting. This blind spot severely limits their reliability in real-world applications. To date, this capability has gone largely unevaluated in complex scenarios, as existing benchmarks either feature sparse object densities or are confined to specific visual domains, and so fail to test models under realistic conditions. To address this gap, we introduce CountQA, a challenging new benchmark designed to probe this deficiency. Comprising over 1,500 question-answer pairs, CountQA features real-world images with high object density, clutter, and occlusion. Evaluating 15 prominent MLLMs on CountQA, we find that the top-performing model achieves a mere 42.9% accuracy, with performance declining as object counts rise. By providing a dedicated benchmark to diagnose and rectify this core weakness, CountQA paves the way for a new generation of MLLMs that are not only descriptively fluent but also numerically grounded and spatially aware. We will open-source the dataset and code upon paper acceptance to foster further research.
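The headline number (42.9% accuracy) suggests an exact-match evaluation over integer answers. Below is a minimal sketch of what such an evaluation loop might look like; the dataset schema (`image`, `question`, integer `answer`) and the `model.generate_answer` interface are illustrative assumptions, not the paper's released code.

```python
import re


def parse_count(text: str) -> int | None:
    """Extract the first integer from a model's free-form answer.

    Commas are stripped first so answers like "1,234" parse correctly.
    """
    match = re.search(r"-?\d+", text.replace(",", ""))
    return int(match.group()) if match else None


def counting_accuracy(model, dataset) -> float:
    """Exact-match accuracy: predicted count equals the ground-truth count.

    `dataset` is assumed to be an iterable of dicts with keys
    'image', 'question', and integer 'answer' (a hypothetical schema).
    `model.generate_answer` stands in for whatever inference API the
    MLLM under test exposes.
    """
    correct = 0
    total = 0
    for example in dataset:
        raw = model.generate_answer(example["image"], example["question"])
        pred = parse_count(raw)
        correct += int(pred == example["answer"])
        total += 1
    return correct / total if total else 0.0
```

Exact match is deliberately strict: predicting 12 when the ground truth is 13 scores zero, which is one plausible reason accuracy would fall off as scenes grow denser and more occluded.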

Country of Origin
🇺🇸 United States

Page Count
29 pages

Category
Computer Science: Artificial Intelligence