[De|Re]constructing VLMs' Reasoning in Counting

Published: October 22, 2025 | arXiv ID: 2510.19555v1

By: Simone Alghisi, Gabriel Roccabruna, Massimo Rizzoli, and more

Potential Business Impact:

Improves the accuracy of automated object counting in images.

Business Areas:
Image Recognition, Data and Analytics, Software

Vision-Language Models (VLMs) have recently gained attention due to their competitive performance on multiple downstream tasks, achieved by following user-input instructions. However, VLMs still exhibit several limitations in visual reasoning, such as difficulties in identifying relations (e.g., spatial, temporal, and among objects), understanding temporal sequences (e.g., frames), and counting objects. In this work, we go beyond score-level benchmark evaluations of VLMs by investigating the underlying causes of their failures and proposing a targeted approach to improve their reasoning capabilities. We study the reasoning skills of seven state-of-the-art VLMs in the counting task under controlled experimental conditions. Our experiments show that VLMs are highly sensitive to the number and type of objects, their spatial arrangement, and the co-occurrence of distractors. A layer-wise analysis reveals that errors are due to incorrect mapping of the last-layer representation into the output space. Our targeted training shows that fine-tuning just the output layer improves accuracy by up to 21%. We corroborate these findings by achieving consistent improvements on real-world datasets.
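The targeted training the abstract describes fine-tunes only the output layer, i.e., the projection from the last-layer representation into the output space, while keeping the rest of the model frozen. A minimal sketch of this setup in PyTorch, using a toy stand-in for a VLM (the module names `backbone` and `output_layer` are illustrative, not the paper's actual architecture):

```python
import torch
import torch.nn as nn

# Toy stand-in for a VLM. The real models in the paper are large pretrained
# networks; here a tiny MLP illustrates the freezing pattern only.
class TinyVLM(nn.Module):
    def __init__(self, in_dim=8, hidden=16, vocab=10):
        super().__init__()
        # Frozen "backbone" producing the last-layer representation.
        self.backbone = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        # The output projection that the targeted training updates.
        self.output_layer = nn.Linear(hidden, vocab)

    def forward(self, x):
        return self.output_layer(self.backbone(x))

model = TinyVLM()

# Freeze every parameter, then unfreeze only the output projection.
for p in model.parameters():
    p.requires_grad = False
for p in model.output_layer.parameters():
    p.requires_grad = True

# Only the output layer's weights and bias remain trainable.
trainable = [n for n, p in model.named_parameters() if p.requires_grad]
print(trainable)
```

An optimizer built over `filter(lambda p: p.requires_grad, model.parameters())` would then update just those two tensors, which is what makes this kind of targeted fine-tuning cheap relative to full fine-tuning.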

Country of Origin
🇮🇹 Italy

Page Count
15 pages

Category
Computer Science:
CV and Pattern Recognition