Survey of Vision-Language-Action Models for Embodied Manipulation

Published: August 21, 2025 | arXiv ID: 2508.15201v1

By: Haoran Li, Yuhui Chen, Wenbo Cui, and more

Potential Business Impact:

Robots learn to perform manipulation tasks from visual and language inputs, enabling more general-purpose automation.

Business Areas:
Robotics Hardware, Science and Engineering, Software

Embodied intelligence systems, which enhance agent capabilities through continuous environment interaction, have garnered significant attention from both academia and industry. Vision-Language-Action (VLA) models, inspired by advances in large foundation models, serve as universal robotic control frameworks that substantially improve agent-environment interaction and broaden the application scenarios of embodied AI robots. This survey comprehensively reviews VLA models for embodied manipulation. First, it chronicles the developmental trajectory of VLA architectures. It then analyzes current research across five critical dimensions: VLA model structures, training datasets, pre-training methods, post-training methods, and model evaluation. Finally, it synthesizes key challenges in VLA development and real-world deployment and outlines promising directions for future research.

Page Count
31 pages

Category
Computer Science: Robotics