Spatial-Aware VLA Pretraining through Visual-Physical Alignment from Human Videos
By: Yicheng Feng, Wanpeng Zhang, Ye Wang, and more
Vision-Language-Action (VLA) models provide a promising paradigm for robot learning by integrating visual perception with language-guided policy learning. However, most existing approaches rely on 2D visual inputs to perform actions in 3D physical environments, creating a significant gap between perception and action grounding. To bridge this gap, we propose a Spatial-Aware VLA Pretraining paradigm that performs explicit alignment between visual space and physical space during pretraining, enabling models to acquire 3D spatial understanding before robot policy learning. Starting from pretrained vision-language models, we leverage large-scale human demonstration videos to extract 3D visual and 3D action annotations, forming a new source of supervision that aligns 2D visual observations with 3D spatial reasoning. We instantiate this paradigm with VIPA-VLA, a dual-encoder architecture that incorporates a 3D visual encoder to augment semantic visual representations with 3D-aware features. When adapted to downstream robot tasks, VIPA-VLA achieves significantly improved grounding between 2D vision and 3D action, resulting in more robust and generalizable robotic policies.
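The abstract describes VIPA-VLA as a dual-encoder architecture in which a 3D visual encoder augments the semantic visual features of a pretrained vision-language model before language-conditioned action prediction. Below is a minimal sketch of such a dual-encoder pipeline, assuming RGB frames, depth (or point-map) input, and a pooled language embedding; all module names, dimensions, and the fusion scheme are illustrative assumptions, not the authors' released implementation.

```python
# Hypothetical dual-encoder VLA sketch: a 2D semantic encoder and a 3D-aware
# encoder are fused, then conditioned on language to predict a 3D action.
import torch
import torch.nn as nn


class DualEncoderVLA(nn.Module):
    def __init__(self, sem_dim=768, spatial_dim=256, fused_dim=512,
                 lang_dim=384, action_dim=7):
        super().__init__()
        # Stand-in 2D semantic encoder (a real system would use a VLM vision tower).
        self.semantic_encoder = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, sem_dim),
        )
        # Stand-in 3D-aware encoder consuming a depth/point-map channel.
        self.spatial_encoder = nn.Sequential(
            nn.Conv2d(1, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, spatial_dim),
        )
        # Fuse semantic and 3D-aware features, then condition on language.
        self.fusion = nn.Linear(sem_dim + spatial_dim, fused_dim)
        self.policy_head = nn.Linear(fused_dim + lang_dim, action_dim)

    def forward(self, rgb, depth, lang_emb):
        sem = self.semantic_encoder(rgb)    # (B, sem_dim) semantic features
        spa = self.spatial_encoder(depth)   # (B, spatial_dim) 3D-aware features
        fused = torch.relu(self.fusion(torch.cat([sem, spa], dim=-1)))
        # Predict a 3D action (e.g., 6-DoF end-effector pose + gripper).
        return self.policy_head(torch.cat([fused, lang_emb], dim=-1))


# Usage example with random tensors standing in for an observation batch.
model = DualEncoderVLA()
action = model(torch.randn(2, 3, 224, 224),   # RGB frames
               torch.randn(2, 1, 224, 224),   # depth / point maps
               torch.randn(2, 384))           # pooled language embeddings
print(action.shape)  # torch.Size([2, 7])
```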
Similar Papers
GeoVLA: Empowering 3D Representations in Vision-Language-Action Models
Robotics
Robots understand 3D space to do tasks better.
DepthVLA: Enhancing Vision-Language-Action Models with Depth-Aware Spatial Reasoning
CV and Pattern Recognition
Helps robots understand where things are better.