EdgeVLA: Efficient Vision-Language-Action Models
By: Paweł Budzianowski, Wesley Maa, Matthew Freed, and more
Potential Business Impact:
Makes robots understand commands and move faster.
Vision-Language Models (VLMs) have emerged as a promising approach to address the data scarcity challenge in robotics, enabling the development of generalizable visuomotor control policies. While models like OpenVLA showcase the potential of this paradigm, deploying large-scale VLMs on resource-constrained mobile manipulation systems remains a significant hurdle. This paper introduces Edge VLA (EVLA), a novel approach designed to significantly enhance the inference speed of Vision-Language-Action (VLA) models. EVLA maintains the representational power of these models while enabling real-time performance on edge devices. We achieve this through two key innovations: 1) Eliminating the autoregressive requirement for end-effector position prediction, leading to a 7x speedup in inference, and 2) Leveraging the efficiency of Small Language Models (SLMs), demonstrating comparable training performance to larger models with significantly reduced computational demands. Our early results demonstrate that EVLA achieves comparable training characteristics to OpenVLA while offering substantial gains in inference speed and memory efficiency. We release our model checkpoints and training codebase (https://github.com/kscalelabs/evla) to foster further research.
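The sketch below illustrates the decoding change the abstract describes: an autoregressive baseline that runs one backbone pass per action dimension versus a joint head that emits every end-effector dimension in a single pass, which is where the reported speedup comes from. This is a minimal illustration, not the released implementation; the toy backbone, the 7-dimensional action space, the vocabulary size, and all module names are assumptions made for the example.

```python
# Minimal sketch (not the released EVLA code) contrasting autoregressive action
# decoding with single-pass joint prediction. Dimensions and modules are assumed.

import torch
import torch.nn as nn

ACTION_DIM = 7   # assumed: 6-DoF end-effector delta + gripper
HIDDEN = 512     # assumed backbone width
VOCAB = 256      # assumed per-dimension action-token vocabulary


class TinyBackbone(nn.Module):
    """Stand-in for the VLM backbone: maps fused vision-language features to a
    hidden state. In the paper this is a (small) pretrained VLM, not an MLP."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(HIDDEN, HIDDEN), nn.GELU(),
                                 nn.Linear(HIDDEN, HIDDEN))

    def forward(self, x):
        return self.net(x)


def autoregressive_decode(backbone, head, embed, ctx):
    """Baseline-style decoding: one backbone pass per action dimension,
    each step conditioned on the previously emitted token."""
    tokens, state = [], ctx
    for _ in range(ACTION_DIM):
        logits = head(backbone(state))        # (B, VOCAB)
        tok = logits.argmax(dim=-1)           # greedy token for this dimension
        tokens.append(tok)
        state = state + embed(tok)            # toy feedback of the chosen token
    return torch.stack(tokens, dim=-1)        # (B, ACTION_DIM), ACTION_DIM passes


def joint_decode(backbone, joint_head, ctx):
    """Non-autoregressive decoding as described in the abstract: all end-effector
    dimensions are predicted from a single backbone pass."""
    logits = joint_head(backbone(ctx)).view(-1, ACTION_DIM, VOCAB)
    return logits.argmax(dim=-1)              # (B, ACTION_DIM), one pass


if __name__ == "__main__":
    torch.manual_seed(0)
    backbone = TinyBackbone()
    head = nn.Linear(HIDDEN, VOCAB)                     # per-step head (AR baseline)
    joint_head = nn.Linear(HIDDEN, ACTION_DIM * VOCAB)  # joint head (single pass)
    embed = nn.Embedding(VOCAB, HIDDEN)
    ctx = torch.randn(1, HIDDEN)                        # fused image + instruction features

    print("autoregressive:", autoregressive_decode(backbone, head, embed, ctx))
    print("joint (1 pass):", joint_decode(backbone, joint_head, ctx))
```

In this toy setup the autoregressive path costs one backbone pass per action dimension while the joint head costs one pass total, which is consistent with the roughly 7x inference speedup reported for a 7-dimensional end-effector action.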
Similar Papers
cVLA: Towards Efficient Camera-Space VLAs
Robotics
Teaches robots to do tasks by seeing and understanding.
SmolVLA: A Vision-Language-Action Model for Affordable and Efficient Robotics
Machine Learning (CS)
Makes robots understand and do tasks from words.
EvoVLA: Self-Evolving Vision-Language-Action Model
CV and Pattern Recognition
Robots learn to do long, tricky jobs better.