Score: 1

VisualActBench: Can VLMs See and Act like a Human?

Published: December 10, 2025 | arXiv ID: 2512.09907v1

By: Daoan Zhang, Pai Liu, Xiaofei Zhou, and more

BigTech Affiliations: Meta

Potential Business Impact:

Teaches AI systems to decide what actions to take from visual input alone, without being explicitly prompted.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Vision-Language Models (VLMs) have achieved impressive progress in perceiving and describing visual environments. However, their ability to proactively reason and act based solely on visual inputs, without explicit textual prompts, remains underexplored. We introduce a new task, Visual Action Reasoning, and propose VisualActBench, a large-scale benchmark comprising 1,074 videos and 3,733 human-annotated actions across four real-world scenarios. Each action is labeled with an Action Prioritization Level (APL) and a proactive-reactive type to assess models' human-aligned reasoning and value sensitivity. We evaluate 29 VLMs on VisualActBench and find that while frontier models like GPT-4o demonstrate relatively strong performance, a significant gap remains compared to human-level reasoning, particularly in generating proactive, high-priority actions. Our results highlight limitations in current VLMs' ability to interpret complex context, anticipate outcomes, and align with human decision-making frameworks. VisualActBench establishes a comprehensive foundation for assessing and improving the real-world readiness of proactive, vision-centric AI agents.
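To make the task setup concrete, here is a minimal sketch of what a benchmark record and evaluation loop for this kind of task could look like. The schema, field names, and the `propose_actions` model interface are illustrative assumptions, not the paper's actual annotation format or scoring metric.

```python
# Illustrative sketch of a VisualActBench-style record and evaluation loop.
# All field names and the model interface are assumptions for clarity;
# the paper does not specify this schema or API.
from dataclasses import dataclass
from typing import List


@dataclass
class ActionAnnotation:
    description: str   # human-written action, e.g. "turn off the stove"
    apl: int           # Action Prioritization Level (assumed integer scale)
    action_type: str   # "proactive" or "reactive"


@dataclass
class VideoSample:
    video_path: str                  # path to the input video clip
    scenario: str                    # one of the four real-world scenarios
    actions: List[ActionAnnotation]  # human-annotated actions for this video


def evaluate(model, samples: List[VideoSample]) -> float:
    """Score a VLM by whether it proposes a proactive action matching any
    human annotation for the video (illustrative metric, not the paper's)."""
    hits = 0
    for sample in samples:
        # The model sees only the video; no textual prompt says what to do.
        predicted = model.propose_actions(sample.video_path)  # assumed interface
        gold = {a.description.lower() for a in sample.actions
                if a.action_type == "proactive"}
        if any(p.lower() in gold for p in predicted):
            hits += 1
    return hits / len(samples) if samples else 0.0
```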

Country of Origin
πŸ‡ΊπŸ‡Έ United States

Page Count
10 pages

Category
Computer Science:
Computer Vision and Pattern Recognition