Survey of Vision-Language-Action Models for Embodied Manipulation
By: Haoran Li, Yuhui Chen, Wenbo Cui, and more
Potential Business Impact:
Robots learn to do tasks by watching and acting.
Embodied intelligence systems, which enhance agent capabilities through continuous interaction with the environment, have garnered significant attention from both academia and industry. Vision-Language-Action (VLA) models, inspired by advances in large foundation models, serve as universal robotic control frameworks that substantially improve agent-environment interaction in embodied intelligence systems, broadening the application scenarios for embodied AI robots. This survey comprehensively reviews VLA models for embodied manipulation. It first chronicles the developmental trajectory of VLA architectures. It then analyzes current research across five critical dimensions: VLA model structures, training datasets, pre-training methods, post-training methods, and model evaluation. Finally, it synthesizes key challenges in VLA development and real-world deployment, and outlines promising future research directions.
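To make the VLA pattern described above concrete, the sketch below shows the basic data flow: visual observations and a language instruction are encoded, fused, and mapped by an action head to low-level robot commands. The module names, dimensions, and the 7-DoF action space are illustrative assumptions for this sketch, not the architecture of any specific model covered by the survey.

```python
# Minimal sketch of the VLA pattern: vision + language -> fused embedding -> action.
# All sizes and module choices below are illustrative assumptions.
import torch
import torch.nn as nn

class ToyVLA(nn.Module):
    def __init__(self, vision_dim=512, text_dim=512, hidden_dim=256, action_dim=7):
        super().__init__()
        # Stand-ins for pretrained encoders (e.g. a ViT image encoder and a language model).
        self.vision_proj = nn.Linear(vision_dim, hidden_dim)
        self.text_proj = nn.Linear(text_dim, hidden_dim)
        # Action head: maps the fused embedding to a continuous action,
        # e.g. a 6-DoF end-effector delta plus a gripper command.
        self.action_head = nn.Sequential(
            nn.Linear(2 * hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, action_dim),
        )

    def forward(self, image_features, instruction_features):
        # Simple concatenation-based fusion of the two modalities.
        fused = torch.cat(
            [self.vision_proj(image_features), self.text_proj(instruction_features)],
            dim=-1,
        )
        return self.action_head(fused)

# Usage: one camera observation embedding and one instruction embedding -> one action vector.
model = ToyVLA()
action = model(torch.randn(1, 512), torch.randn(1, 512))
print(action.shape)  # torch.Size([1, 7])
```

Real VLA systems typically replace the linear projections with large pretrained vision and language backbones and may predict action tokens or diffusion-based action distributions instead of a single regression output; the sketch only illustrates the overall input-output structure.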
Similar Papers
Efficient Vision-Language-Action Models for Embodied Manipulation: A Systematic Survey
Robotics
Makes robots understand and do tasks faster.