Assessing the Visual Enumeration Abilities of Specialized Counting Architectures and Vision-Language Models
By: Kuinan Hou, Jing Mi, Marco Zorzi and more
Potential Business Impact:
Lets computers count objects in pictures.
Counting the number of items in a visual scene remains a fundamental yet challenging task in computer vision. Traditional approaches rely on domain-specific counting architectures trained on datasets annotated with a predefined set of object categories. However, recent progress in large-scale multimodal vision-language models (VLMs) suggests that these domain-general models may offer a flexible alternative for open-set object counting. In this study, we therefore systematically compare the performance of state-of-the-art specialized counting architectures against VLMs on two popular counting datasets, as well as on a novel benchmark specifically created to allow finer-grained control over the visual properties of test images. Our findings show that most VLMs can approximately enumerate the number of items in a visual scene, matching or even surpassing the performance of specialized computer vision architectures. Notably, enumeration accuracy improves significantly when VLMs are prompted to generate intermediate representations (i.e., locations and verbal labels) of each object to be counted. Nevertheless, none of the models can reliably count the objects in complex visual scenes, showing that further research is needed to create AI systems that can reliably deploy counting procedures in realistic environments.
Similar Papers
[De|Re]constructing VLMs' Reasoning in Counting
CV and Pattern Recognition
Makes computers count objects better in pictures.
Your Vision-Language Model Can't Even Count to 20: Exposing the Failures of VLMs in Compositional Counting
CV and Pattern Recognition
AI struggles to count mixed objects accurately.
Vision language models are unreliable at trivial spatial cognition
CV and Pattern Recognition
Computers struggle to tell what's left or right.