VisualActBench: Can VLMs See and Act like a Human?
By: Daoan Zhang, Pai Liu, Xiaofei Zhou, and more
Potential Business Impact:
Teaches computers to act smartly by just watching.
Vision-Language Models (VLMs) have achieved impressive progress in perceiving and describing visual environments. However, their ability to proactively reason and act based solely on visual inputs, without explicit textual prompts, remains underexplored. We introduce a new task, Visual Action Reasoning, and propose VisualActBench, a large-scale benchmark comprising 1,074 videos and 3,733 human-annotated actions across four real-world scenarios. Each action is labeled with an Action Prioritization Level (APL) and a proactive-reactive type to assess models' human-aligned reasoning and value sensitivity. We evaluate 29 VLMs on VisualActBench and find that while frontier models like GPT-4o demonstrate relatively strong performance, a significant gap remains compared to human-level reasoning, particularly in generating proactive, high-priority actions. Our results highlight limitations in current VLMs' ability to interpret complex context, anticipate outcomes, and align with human decision-making frameworks. VisualActBench establishes a comprehensive foundation for assessing and improving the real-world readiness of proactive, vision-centric AI agents.
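To make the annotation scheme concrete, here is a minimal Python sketch of how a benchmark item with APL and proactive/reactive labels might be represented, along with a toy coverage-style score. The field names, the APL scale, and the `evaluate_sample` metric are assumptions for illustration; the abstract does not specify the actual dataset schema or evaluation protocol.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List


class ActionType(Enum):
    """Whether an annotated action is taken preemptively or in response to an event."""
    PROACTIVE = "proactive"
    REACTIVE = "reactive"


@dataclass
class AnnotatedAction:
    """One human-annotated action for a video (hypothetical schema)."""
    description: str          # free-text action, e.g. "turn off the stove"
    apl: int                  # Action Prioritization Level; numeric scale is an assumption
    action_type: ActionType   # proactive vs. reactive label


@dataclass
class VideoSample:
    """One benchmark item: a video clip plus its reference actions (hypothetical schema)."""
    video_path: str
    scenario: str                                  # one of the four real-world scenarios
    actions: List[AnnotatedAction] = field(default_factory=list)


def evaluate_sample(model_actions: List[str], sample: VideoSample) -> float:
    """Toy score: fraction of high-priority reference actions the model's outputs cover.

    Illustrative only; the paper's actual scoring is not described in the abstract.
    """
    high_priority = [a.description.lower() for a in sample.actions if a.apl >= 2]
    if not high_priority:
        return 1.0
    hits = sum(any(ref in pred.lower() for pred in model_actions) for ref in high_priority)
    return hits / len(high_priority)
```

A model that proposes only reactive, low-priority actions would score poorly under a metric like this, which mirrors the gap the authors report between frontier VLMs and human annotators on proactive, high-priority actions.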
Similar Papers
Seeing is Believing (and Predicting): Context-Aware Multi-Human Behavior Prediction with Vision Language Models
CV and Pattern Recognition
Helps robots understand what many people will do.
Vision-Language Models Unlock Task-Centric Latent Actions
Machine Learning (CS)
Teaches robots to ignore distractions and learn better.
VisRes Bench: On Evaluating the Visual Reasoning Capabilities of VLMs
CV and Pattern Recognition
Tests if computers *really* see, not just guess.