Efficient Vision-Language-Action Models for Embodied Manipulation: A Systematic Survey

Published: October 20, 2025 | arXiv ID: 2510.17111v1

By: Weifan Guan, Qinghao Hu, Aosheng Li, and more

Potential Business Impact:

Enables robots to interpret natural-language instructions and act in real time on resource-constrained edge hardware, lowering the compute and memory cost of deploying mobile manipulators.

Business Areas:
Robotics Hardware, Science and Engineering, Software

Vision-Language-Action (VLA) models extend vision-language models to embodied control by mapping natural-language instructions and visual observations to robot actions. Despite their capabilities, VLA systems face significant challenges due to their massive computational and memory demands, which conflict with the constraints of edge platforms such as on-board mobile manipulators that require real-time performance. Addressing this tension has become a central focus of recent research. In light of the growing efforts toward more efficient and scalable VLA systems, this survey provides a systematic review of approaches for improving VLA efficiency, with an emphasis on reducing latency, memory footprint, and training and inference costs. We categorize existing solutions into four dimensions: model architecture, perception features, action generation, and training/inference strategies, summarizing representative techniques within each category. Finally, we discuss future trends and open challenges, highlighting directions for advancing efficient embodied intelligence.
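To make the perception-features dimension concrete, here is a minimal, hypothetical sketch of one representative technique class such surveys cover: pruning visual tokens before action decoding, so the policy attends to fewer patch embeddings and inference latency drops. The function name `prune_visual_tokens`, the `keep_ratio` parameter, and the CLS-similarity scoring are illustrative assumptions, not the paper's API or any specific method it reviews.

```python
# Illustrative sketch (not from the paper): drop low-relevance visual tokens
# before the action decoder to cut attention cost. All names are hypothetical.
import torch


def prune_visual_tokens(tokens: torch.Tensor,
                        cls: torch.Tensor,
                        keep_ratio: float = 0.25) -> torch.Tensor:
    """Keep the visual tokens most similar to a global summary token.

    tokens: (B, N, D) patch embeddings from the vision encoder
    cls:    (B, D)    global summary (e.g., [CLS]) embedding
    returns (B, k, D) with k = ceil-free int(N * keep_ratio), at least 1
    """
    # Dot-product similarity to the summary token as a cheap relevance proxy.
    scores = torch.einsum("bnd,bd->bn", tokens, cls)            # (B, N)
    k = max(1, int(tokens.shape[1] * keep_ratio))
    idx = scores.topk(k, dim=1).indices                         # (B, k)
    idx = idx.unsqueeze(-1).expand(-1, -1, tokens.shape[-1])    # (B, k, D)
    return tokens.gather(1, idx)


if __name__ == "__main__":
    B, N, D = 2, 196, 512                     # e.g., 14x14 ViT patch grid
    tokens, cls = torch.randn(B, N, D), torch.randn(B, D)
    pruned = prune_visual_tokens(tokens, cls)
    print(pruned.shape)                        # torch.Size([2, 49, 512])
```

Because self-attention cost grows quadratically with sequence length, keeping 25% of the visual tokens in this sketch would shrink the attention work over those tokens by roughly 16x, which is the kind of latency/memory trade-off the survey's perception-feature category examines.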

Page Count
25 pages

Category
Computer Science:
Robotics