UrbanVideo-Bench: Benchmarking Vision-Language Models on Embodied Intelligence with Video Data in Urban Spaces
By: Baining Zhao, Jianjie Fang, Zichao Dai and more
Potential Business Impact:
Helps drones and robots see and navigate cities.
Large multimodal models exhibit remarkable intelligence, yet their embodied cognitive abilities during motion in open-ended urban 3D space remain to be explored. We introduce a benchmark to evaluate whether video-large language models (Video-LLMs) can naturally process continuous first-person visual observations like humans, enabling recall, perception, reasoning, and navigation. We manually controlled drones to collect 3D embodied motion video data from real-world cities and simulated environments, resulting in 1.5k video clips. We then design a pipeline to generate 5.2k multiple-choice questions. Evaluations of 17 widely-used Video-LLMs reveal current limitations in urban embodied cognition. Correlation analysis provides insight into the relationships between different tasks, showing that causal reasoning correlates strongly with recall, perception, and navigation, while counterfactual and associative reasoning abilities exhibit lower correlations with other tasks. We also validate the potential for Sim-to-Real transfer in urban embodiment through fine-tuning.
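The cross-task correlation analysis described in the abstract can be illustrated with a minimal sketch: given each evaluated model's accuracy on the benchmark's task categories, Pearson correlations computed across models indicate which abilities tend to rise and fall together. This is not the authors' code; the task names mirror the abstract, and all model rows and accuracy values below are hypothetical placeholders.

```python
# Minimal sketch of cross-task correlation analysis (not the authors' pipeline).
# Accuracy values are hypothetical placeholders, not benchmark results.
import numpy as np

tasks = ["recall", "perception", "causal", "counterfactual", "associative", "navigation"]

# Rows: evaluated Video-LLMs; columns: per-task accuracy (placeholder numbers).
accuracy = np.array([
    [0.62, 0.58, 0.55, 0.41, 0.44, 0.37],
    [0.54, 0.51, 0.49, 0.43, 0.40, 0.33],
    [0.48, 0.45, 0.42, 0.39, 0.42, 0.29],
])

# Pearson correlation between task columns, computed across models.
corr = np.corrcoef(accuracy, rowvar=False)

for i, task_i in enumerate(tasks):
    for j in range(i + 1, len(tasks)):
        print(f"{task_i} vs {tasks[j]}: r = {corr[i, j]:+.2f}")
```

With real per-model scores in place of the placeholders, a pattern like the one reported in the paper would show causal reasoning correlating strongly with recall, perception, and navigation, and counterfactual and associative reasoning correlating more weakly.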
Similar Papers
NavBench: Probing Multimodal Large Language Models for Embodied Navigation
CV and Pattern Recognition
Helps robots understand and move in new places.
How Well Do Vision-Language Models Understand Cities? A Comparative Study on Spatial Reasoning from Street-View Images
CV and Pattern Recognition
Helps computers understand city streets better.
Do Vision-Language Models See Urban Scenes as People Do? An Urban Perception Benchmark
CV and Pattern Recognition
Helps AI understand city pictures like people do.