Object Counting with GPT-4o and GPT-5: A Comparative Study
By: Richard Füzesséry, Kaziwa Saleh, Sándor Szénási, and more
Potential Business Impact:
Lets computers count things they've never seen.
Zero-shot object counting attempts to estimate the number of object instances belonging to novel categories that the vision model performing the counting has never encountered during training. Existing methods typically require large amounts of annotated data and often rely on visual exemplars to guide the counting process. Large language models (LLMs), however, are powerful tools with remarkable reasoning and data-understanding abilities, which suggests the possibility of using them for counting tasks without any supervision. In this work we leverage the visual capabilities of two multi-modal LLMs, GPT-4o and GPT-5, to perform object counting in a zero-shot manner using only textual prompts. We evaluate both models on the FSC-147 and CARPK datasets and provide a comparative analysis. Our findings show that the models achieve performance comparable to state-of-the-art zero-shot approaches on FSC-147 and, in some cases, even surpass them.
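To make the prompting setup concrete, the sketch below shows one way to ask a multi-modal OpenAI model for an object count from an image plus a text-only prompt. This is not the authors' exact pipeline: the model name, prompt wording, and answer-parsing logic are illustrative assumptions.

```python
# Minimal sketch: text-prompted object counting with a multi-modal OpenAI model.
# Assumptions (not from the paper): model name, prompt phrasing, integer parsing.
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def count_objects(image_path: str, category: str, model: str = "gpt-4o") -> int:
    """Ask the model how many instances of `category` appear in the image."""
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("utf-8")

    response = client.chat.completions.create(
        model=model,
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": f"Count the number of {category} in this image. "
                         "Reply with a single integer only."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
            ],
        }],
    )

    # Parse the first run of digits in the reply; fall back to 0 on failure.
    reply = response.choices[0].message.content or ""
    digits = "".join(ch for ch in reply if ch.isdigit())
    return int(digits) if digits else 0


# Example usage: count_objects("carpark.jpg", "cars")
```

Counts produced this way can then be compared against dataset ground truth, for example with mean absolute error, to obtain figures comparable to those typically reported on FSC-147 and CARPK.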
Similar Papers
SDVPT: Semantic-Driven Visual Prompt Tuning for Open-World Object Counting
CV and Pattern Recognition
Teaches computers to count anything in pictures.
CountZES: Counting via Zero-Shot Exemplar Selection
CV and Pattern Recognition
Counts things it's never seen before.
Counting Through Occlusion: Framework for Open World Amodal Counting
CV and Pattern Recognition
Counts hidden objects even when they are blocked.