cVLA: Towards Efficient Camera-Space VLAs
By: Max Argus, Jelena Bratulic, Houman Masnavi, and more
Potential Business Impact:
Teaches robots to do tasks by seeing and understanding.
Vision-Language-Action (VLA) models offer a compelling framework for tackling complex robotic manipulation tasks, but they are often expensive to train. In this paper, we propose a novel VLA approach that leverages the competitive performance of Vision Language Models (VLMs) on 2D images to directly infer robot end-effector poses in image-frame coordinates. Unlike prior VLA models that output low-level controls, our model predicts trajectory waypoints, making it both more efficient to train and agnostic to robot embodiment. Despite its lightweight design, our next-token prediction architecture effectively learns meaningful and executable robot trajectories. We further explore the underutilized potential of incorporating depth images, inference-time techniques such as decoding strategies, and demonstration-conditioned action generation. Our model is trained on a simulated dataset and exhibits strong sim-to-real transfer capabilities. We evaluate our approach using a combination of simulated and real data, demonstrating its effectiveness on a real robotic system.
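To make the waypoint-prediction idea concrete, below is a minimal sketch (not the authors' code) of how 2D image-frame waypoints could be serialized into discrete tokens for a next-token prediction model and decoded back into pixel coordinates. The bin count, image resolution, and function names here are illustrative assumptions, not details from the paper.

```python
# Minimal sketch: quantize image-frame trajectory waypoints into tokens and back.
# All constants (NUM_BINS, IMG_W, IMG_H) are assumed for illustration only.
import numpy as np

NUM_BINS = 256           # assumed number of discretization bins per coordinate
IMG_W, IMG_H = 640, 480  # assumed camera image resolution


def waypoints_to_tokens(waypoints):
    """Quantize (u, v) pixel waypoints into integer tokens, two per waypoint."""
    tokens = []
    for u, v in waypoints:
        tu = int(np.clip(u / IMG_W * NUM_BINS, 0, NUM_BINS - 1))
        tv = int(np.clip(v / IMG_H * NUM_BINS, 0, NUM_BINS - 1))
        tokens.extend([tu, tv])
    return tokens


def tokens_to_waypoints(tokens):
    """Invert the quantization: map token pairs back to pixel coordinates (bin centers)."""
    coords = np.array(tokens, dtype=np.float32).reshape(-1, 2)
    u = (coords[:, 0] + 0.5) / NUM_BINS * IMG_W
    v = (coords[:, 1] + 0.5) / NUM_BINS * IMG_H
    return np.stack([u, v], axis=1)


if __name__ == "__main__":
    # Toy 3-waypoint trajectory in pixel coordinates.
    traj = [(120.0, 300.0), (250.5, 240.0), (400.0, 180.0)]
    toks = waypoints_to_tokens(traj)
    print("tokens:", toks)
    print("reconstructed waypoints:\n", tokens_to_waypoints(toks))
```

In a scheme like this, the waypoint tokens would sit in the same vocabulary the VLM already decodes, and the bin count trades positional precision against vocabulary size; the paper's actual tokenization and decoding strategies may differ.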
Similar Papers
EdgeVLA: Efficient Vision-Language-Action Models
Robotics
Makes robots understand and move faster.
SmolVLA: A Vision-Language-Action Model for Affordable and Efficient Robotics
Machine Learning (CS)
Makes robots understand and do tasks from words.
Vision Language Action Models in Robotic Manipulation: A Systematic Review
Robotics
Robots understand what you say and see.