VLA-Adapter: An Effective Paradigm for Tiny-Scale Vision-Language-Action Model
By: Yihao Wang, Pengxiang Ding, Lingxiao Li, and more
Potential Business Impact:
Robots learn tasks faster with less training.
Vision-Language-Action (VLA) models typically bridge the gap between perceptual and action spaces by pre-training a large-scale Vision-Language Model (VLM) on robotic data. While this approach greatly enhances performance, it also incurs significant training costs. In this paper, we investigate how to effectively bridge vision-language (VL) representations to action (A). We introduce VLA-Adapter, a novel paradigm designed to reduce the reliance of VLA models on large-scale VLMs and extensive pre-training. To this end, we first systematically analyze the effectiveness of various VL conditions and present key findings on which conditions are essential for bridging perception and action spaces. Based on these insights, we propose a lightweight Policy module with Bridge Attention, which autonomously injects the optimal condition into the action space. In this way, our method achieves high performance using only a 0.5B-parameter backbone, without any robotic data pre-training. Extensive experiments on both simulated and real-world robotic benchmarks demonstrate that VLA-Adapter not only achieves state-of-the-art-level performance, but also offers the fastest inference speed reported to date. Furthermore, thanks to the proposed advanced bridging paradigm, VLA-Adapter enables the training of a powerful VLA model in just 8 hours on a single consumer-grade GPU, greatly lowering the barrier to deploying VLA models. Project page: https://vla-adapter.github.io/.
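To make the bridging idea concrete, below is a minimal, hypothetical sketch of a "Bridge Attention"-style layer, based only on the abstract's description: lightweight action queries cross-attend to vision-language (VL) condition tokens, and a learnable gate controls how strongly the condition is injected into the action space. All names, shapes, and the gating choice are illustrative assumptions, not the paper's actual implementation.

```python
# Illustrative sketch only; assumes PyTorch. The class name, dimensions, and the
# scalar gate are hypothetical stand-ins for the paper's Bridge Attention design.
import torch
import torch.nn as nn


class BridgeAttentionBlock(nn.Module):
    def __init__(self, dim: int = 256, num_heads: int = 4):
        super().__init__()
        # Cross-attention: action queries attend to VL condition tokens from the backbone.
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Self-attention over the action queries themselves.
        self.self_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)
        self.norm3 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
        # Learnable scalar gate: a simple proxy for "autonomously" deciding how much
        # of the VL condition to inject (assumption, not the paper's exact mechanism).
        self.gate = nn.Parameter(torch.zeros(1))

    def forward(self, action_queries: torch.Tensor, vl_condition: torch.Tensor) -> torch.Tensor:
        # action_queries: (B, num_action_tokens, dim); vl_condition: (B, num_vl_tokens, dim)
        x = action_queries
        attn_out, _ = self.cross_attn(self.norm1(x), vl_condition, vl_condition)
        x = x + torch.tanh(self.gate) * attn_out  # gated injection of the VL condition
        sa_out, _ = self.self_attn(self.norm2(x), self.norm2(x), self.norm2(x))
        x = x + sa_out
        x = x + self.mlp(self.norm3(x))
        return x


# Tiny usage example with random tensors standing in for real features.
if __name__ == "__main__":
    block = BridgeAttentionBlock()
    queries = torch.randn(2, 8, 256)     # 8 action-chunk queries
    condition = torch.randn(2, 64, 256)  # 64 VL tokens from a small backbone
    print(block(queries, condition).shape)  # torch.Size([2, 8, 256])
```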
Similar Papers
SmolVLA: A Vision-Language-Action Model for Affordable and Efficient Robotics
Machine Learning (CS)
Makes robots understand and do tasks from words.
Pure Vision Language Action (VLA) Models: A Comprehensive Survey
Robotics
Robots learn to see, talk, and do tasks.