Do Vision-Language Models See Urban Scenes as People Do? An Urban Perception Benchmark
By: Rashid Mushkani
Potential Business Impact:
Helps AI understand city pictures like people do.
Understanding how people read city scenes can inform design and planning. We introduce a small benchmark for testing vision-language models (VLMs) on urban perception, built from 100 Montreal street images split evenly between photographs and photorealistic synthetic scenes. Twelve participants from seven community groups supplied 230 annotation forms covering 30 dimensions that mix physical attributes and subjective impressions; French responses were normalized to English. We evaluated seven VLMs in a zero-shot setup with a structured prompt and a deterministic parser, scoring single-choice items with accuracy and multi-label items with Jaccard overlap; human agreement is measured with Krippendorff's alpha and pairwise Jaccard. Results suggest stronger model alignment on visible, objective properties than on subjective appraisals. The top system (claude-sonnet) reaches a macro score of 0.31 and a mean Jaccard of 0.48 on multi-label items. Higher human agreement coincides with better model scores, and scores are slightly lower on synthetic images. We release the benchmark, prompts, and harness for reproducible, uncertainty-aware evaluation in participatory urban analysis.
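The scoring scheme described in the abstract (accuracy for single-choice items, Jaccard overlap for multi-label items, pairwise Jaccard for inter-annotator agreement) can be sketched as below. This is a minimal illustration under assumed conventions, not the released harness: the item structure, field names, helper functions, and demo data are hypothetical.

```python
from itertools import combinations


def jaccard(pred: set, gold: set) -> float:
    """Jaccard overlap |A ∩ B| / |A ∪ B|; defined as 1.0 when both sets are empty."""
    if not pred and not gold:
        return 1.0
    return len(pred & gold) / len(pred | gold)


def score_items(items):
    """Score a list of items, each a dict with 'type' ('single' or 'multi'),
    'pred', and 'gold'. Returns single-choice accuracy and mean Jaccard
    over multi-label items (a simplified stand-in for the paper's metrics)."""
    single_hits, single_total, jaccard_scores = 0, 0, []
    for item in items:
        if item["type"] == "single":
            single_total += 1
            single_hits += int(item["pred"] == item["gold"])
        else:  # multi-label item
            jaccard_scores.append(jaccard(set(item["pred"]), set(item["gold"])))
    accuracy = single_hits / single_total if single_total else float("nan")
    mean_jaccard = (sum(jaccard_scores) / len(jaccard_scores)
                    if jaccard_scores else float("nan"))
    return {"single_choice_accuracy": accuracy, "mean_jaccard": mean_jaccard}


def pairwise_jaccard_agreement(annotations):
    """Mean Jaccard over all annotator pairs for one multi-label item.
    annotations: list of label sets, one per annotator."""
    pairs = list(combinations(annotations, 2))
    if not pairs:
        return float("nan")
    return sum(jaccard(a, b) for a, b in pairs) / len(pairs)


if __name__ == "__main__":
    # Hypothetical example: one single-choice and one multi-label item.
    demo = [
        {"type": "single", "pred": "residential", "gold": "residential"},
        {"type": "multi", "pred": ["trees", "benches"], "gold": ["trees", "bike lane"]},
    ]
    print(score_items(demo))  # {'single_choice_accuracy': 1.0, 'mean_jaccard': 0.333...}
    # Agreement among three hypothetical annotators on one multi-label dimension.
    print(pairwise_jaccard_agreement([{"trees"}, {"trees", "benches"}, {"benches"}]))
```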
Similar Papers
How Well Do Vision-Language Models Understand Cities? A Comparative Study on Spatial Reasoning from Street-View Images
CV and Pattern Recognition
Helps computers understand city streets better.
Towards General Urban Monitoring with Vision-Language Models: A Review, Evaluation, and a Research Agenda
CV and Pattern Recognition
Lets computers see city problems like people.
Not There Yet: Evaluating Vision Language Models in Simulating the Visual Perception of People with Low Vision
CV and Pattern Recognition
Helps computers understand how people with poor vision see.