Grounding Actions in Camera Space: Observation-Centric Vision-Language-Action Policy
By: Tianyi Zhang, Haonan Duan, Haoran Hao, and more
Potential Business Impact:
Robots see better, act smarter, and learn faster.
Vision-Language-Action (VLA) models frequently encounter challenges in generalizing to real-world environments due to inherent discrepancies between observation and action spaces. Although training data are collected from diverse camera perspectives, the models typically predict end-effector poses within the robot base coordinate frame, resulting in spatial inconsistencies. To mitigate this limitation, we introduce the Observation-Centric VLA (OC-VLA) framework, which grounds action predictions directly in the camera observation space. Leveraging the camera's extrinsic calibration matrix, OC-VLA transforms end-effector poses from the robot base coordinate system into the camera coordinate system, thereby unifying prediction targets across heterogeneous viewpoints. This lightweight, plug-and-play strategy ensures robust alignment between perception and action, substantially improving model resilience to camera viewpoint variations. The proposed approach is readily compatible with existing VLA architectures, requiring no substantial modifications. Comprehensive evaluations on both simulated and real-world robotic manipulation tasks demonstrate that OC-VLA accelerates convergence, enhances task success rates, and improves cross-view generalization. The code will be publicly available.
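At its core, the paper's re-grounding step is a single rigid-body change of frame: the end-effector pose predicted (or supervised) in the robot base frame is re-expressed in the camera frame via the extrinsic calibration. Below is a minimal sketch of that transform, assuming the extrinsics are provided as the camera's 4x4 pose in the base frame; the function and variable names are illustrative and not taken from the paper or its code release.

```python
import numpy as np

def pose_to_matrix(position, rotation):
    """Assemble a 4x4 homogeneous transform from a 3-vector position
    and a 3x3 rotation matrix."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = position
    return T

def base_pose_to_camera_frame(T_base_ee, T_base_cam):
    """Re-express an end-effector pose, given in the robot base frame,
    in the camera frame.

    T_base_ee  : 4x4 end-effector pose in the base frame.
    T_base_cam : 4x4 camera pose in the base frame (extrinsic calibration).
    Returns the 4x4 end-effector pose in the camera frame.
    """
    T_cam_base = np.linalg.inv(T_base_cam)  # base -> camera transform
    return T_cam_base @ T_base_ee
```

Because every viewpoint's extrinsics map its observations into the same camera-centric action target, poses collected from different camera placements become directly comparable training signals, which is the alignment property the abstract attributes to OC-VLA.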
Similar Papers
cVLA: Towards Efficient Camera-Space VLAs
Robotics
Teaches robots to do tasks by seeing and understanding.
GeoAware-VLA: Implicit Geometry Aware Vision-Language-Action Model
Robotics
Robots see better from new angles.
OG-VLA: 3D-Aware Vision Language Action Model via Orthographic Image Generation
Robotics
Robots follow spoken instructions in new places.