DepthVLA: Enhancing Vision-Language-Action Models with Depth-Aware Spatial Reasoning
By: Tianyuan Yuan, Yicheng Liu, Chenhao Lu, and more
Potential Business Impact:
Helps robots better understand where things are.
Vision-Language-Action (VLA) models have recently shown impressive generalization and language-guided manipulation capabilities. However, their performance degrades on tasks requiring precise spatial reasoning, owing to the limited spatial understanding inherited from Vision-Language Models (VLMs). Existing VLAs rely on extensive action-data pretraining to ground VLMs in 3D space, which reduces training efficiency and still falls short of accurate spatial understanding. In this work, we present DepthVLA, a simple yet effective VLA architecture that explicitly incorporates spatial awareness through a pretrained depth prediction module. DepthVLA adopts a mixture-of-transformers design that unifies a VLM, a depth transformer, and an action expert with fully shared attention, forming an end-to-end model with enhanced spatial reasoning. Extensive evaluations in both real-world and simulated environments show that DepthVLA outperforms state-of-the-art approaches, achieving 78.5% vs. 65.0% progress in real-world tasks, 94.9% vs. 93.6% in the LIBERO simulator, and 74.8% vs. 58.8% in the Simpler simulator. Our code will be made publicly available.
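To make the mixture-of-transformers idea concrete, here is a minimal sketch (not the authors' released code) of one layer in which each stream (VLM, depth transformer, action expert) keeps its own projections and feed-forward weights while attention is computed jointly over the concatenated token sequence, i.e. the "fully shared attention" described in the abstract. All class names, dimensions, and the three-stream split are illustrative assumptions.

```python
# Illustrative sketch of a mixture-of-transformers layer with shared attention.
# Assumption: three token streams (VLM, depth, action), each with its own
# per-stream parameters, attending jointly over the concatenated sequence.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SharedAttentionMoTLayer(nn.Module):
    def __init__(self, dim=512, num_heads=8, num_streams=3):
        super().__init__()
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        # Per-stream (VLM / depth / action) projections and feed-forwards.
        self.qkv = nn.ModuleList(nn.Linear(dim, 3 * dim) for _ in range(num_streams))
        self.out = nn.ModuleList(nn.Linear(dim, dim) for _ in range(num_streams))
        self.ffn = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
            for _ in range(num_streams)
        )
        self.norm1 = nn.ModuleList(nn.LayerNorm(dim) for _ in range(num_streams))
        self.norm2 = nn.ModuleList(nn.LayerNorm(dim) for _ in range(num_streams))

    def forward(self, streams):
        # streams: list of (B, T_i, dim) tensors, one per modality.
        lengths = [x.shape[1] for x in streams]
        qkv = [
            self.qkv[i](self.norm1[i](x)).chunk(3, dim=-1)
            for i, x in enumerate(streams)
        ]
        # Concatenate queries/keys/values so every token attends to every modality.
        q = torch.cat([t[0] for t in qkv], dim=1)
        k = torch.cat([t[1] for t in qkv], dim=1)
        v = torch.cat([t[2] for t in qkv], dim=1)
        B, T, D = q.shape
        q, k, v = (
            t.view(B, T, self.num_heads, self.head_dim).transpose(1, 2)
            for t in (q, k, v)
        )
        attn = F.scaled_dot_product_attention(q, k, v)  # shared attention
        attn = attn.transpose(1, 2).reshape(B, T, D)
        # Route attended tokens back to their streams for per-stream output/FFN.
        outs = []
        for i, (x, chunk) in enumerate(zip(streams, attn.split(lengths, dim=1))):
            h = x + self.out[i](chunk)
            outs.append(h + self.ffn[i](self.norm2[i](h)))
        return outs
```

In this sketch, parameter separation lets a pretrained VLM, a pretrained depth module, and an action expert keep their own weights, while the shared attention lets depth and language tokens condition the action tokens end to end; how DepthVLA initializes and trains each stream is detailed in the paper itself.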
Similar Papers
QDepth-VLA: Quantized Depth Prediction as Auxiliary Supervision for Vision-Language-Action Models
CV and Pattern Recognition
Helps robots understand 3D space to perform tasks better.
GeoVLA: Empowering 3D Representations in Vision-Language-Action Models
Robotics
Robots understand 3D space to do tasks better.
DualVLA: Building a Generalizable Embodied Agent via Partial Decoupling of Reasoning and Action
CV and Pattern Recognition
Teaches robots to act and think better.