QDepth-VLA: Quantized Depth Prediction as Auxiliary Supervision for Vision-Language-Action Models

Published: October 16, 2025 | arXiv ID: 2510.14836v1

By: Yixuan Li, Yuhui Chen, Mingcai Zhou, and more

Potential Business Impact:

Helps robots perceive 3D structure, enabling more precise fine-grained manipulation.

Business Areas:
Image Recognition Data and Analytics, Software

Spatial perception and reasoning are crucial for Vision-Language-Action (VLA) models to accomplish fine-grained manipulation tasks. However, existing approaches often lack the ability to understand and reason over the 3D structures essential for precise control. To address this limitation, we propose QDepth-VLA, a general framework that augments VLA models with an auxiliary depth prediction task. A dedicated depth expert is designed to predict quantized latent tokens of depth maps obtained from a VQ-VAE encoder, enabling the model to learn depth-aware representations that capture critical geometric cues. Experimental results on simulation benchmarks and real-world tasks demonstrate that QDepth-VLA yields strong spatial reasoning and competitive performance on manipulation tasks.
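The sketch below illustrates the general idea described in the abstract: a VQ-VAE encoder quantizes a depth map into discrete codebook indices, and an auxiliary "depth expert" head predicts those token indices from the VLA model's hidden features with a cross-entropy loss added to the action objective. This is a minimal, assumption-laden illustration, not the paper's actual architecture; all module names, layer sizes, and the loss weighting are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DepthVQEncoder(nn.Module):
    """Toy VQ-VAE encoder: maps a depth map to discrete codebook token indices."""

    def __init__(self, codebook_size=512, embed_dim=64):
        super().__init__()
        # Two strided convs downsample a 256x256 depth map by 16x -> 16x16 grid.
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=4, stride=4), nn.ReLU(),
            nn.Conv2d(32, embed_dim, kernel_size=4, stride=4),
        )
        self.codebook = nn.Embedding(codebook_size, embed_dim)

    @torch.no_grad()
    def encode_to_tokens(self, depth):  # depth: (B, 1, H, W)
        z = self.conv(depth)                        # (B, D, h, w)
        B, D, h, w = z.shape
        z = z.permute(0, 2, 3, 1).reshape(-1, D)    # (B*h*w, D)
        # Nearest codebook entry gives the quantized token index per location.
        dists = torch.cdist(z, self.codebook.weight)
        tokens = dists.argmin(dim=-1)
        return tokens.view(B, h * w)                # (B, num_tokens)


class DepthExpertHead(nn.Module):
    """Auxiliary head: predicts depth token indices from VLA hidden features."""

    def __init__(self, hidden_dim, codebook_size=512, num_tokens=256):
        super().__init__()
        self.num_tokens = num_tokens
        self.codebook_size = codebook_size
        self.proj = nn.Linear(hidden_dim, num_tokens * codebook_size)

    def forward(self, hidden):  # hidden: (B, hidden_dim), e.g. pooled VLA features
        logits = self.proj(hidden)
        return logits.view(-1, self.num_tokens, self.codebook_size)


def auxiliary_depth_loss(hidden, depth, vq_encoder, depth_head):
    """Cross-entropy between predicted and VQ-quantized depth tokens."""
    target_tokens = vq_encoder.encode_to_tokens(depth)  # (B, num_tokens)
    logits = depth_head(hidden)                         # (B, num_tokens, K)
    return F.cross_entropy(logits.flatten(0, 1), target_tokens.flatten())


if __name__ == "__main__":
    B, hidden_dim = 2, 256
    vq = DepthVQEncoder()
    head = DepthExpertHead(hidden_dim, num_tokens=16 * 16)
    depth = torch.rand(B, 1, 256, 256)     # dummy depth maps
    hidden = torch.randn(B, hidden_dim)    # dummy VLA features
    loss_depth = auxiliary_depth_loss(hidden, depth, vq, head)
    # In training one would combine this with the action objective, e.g.:
    # total_loss = loss_action + lambda_depth * loss_depth
    print(loss_depth.item())
```

The key design choice this sketch captures is that depth supervision is applied in a discrete token space (the VQ-VAE codebook indices) rather than as a dense regression target, so the auxiliary task reduces to classification over codebook entries.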

Page Count
9 pages

Category
Computer Science:
Computer Vision and Pattern Recognition