Score: 1

GLaD: Geometric Latent Distillation for Vision-Language-Action Models

Published: December 10, 2025 | arXiv ID: 2512.09619v1

By: Minghao Guo, Meng Cao, Jiachen Tao, and more

Potential Business Impact:

Helps robots understand the 3D shape and position of objects so they can grasp and move them more reliably.

Business Areas:
Image Recognition, Data and Analytics, Software

Most existing Vision-Language-Action (VLA) models rely primarily on RGB information, ignoring geometric cues that are crucial for spatial reasoning and manipulation. In this work, we introduce GLaD, a geometry-aware VLA framework that incorporates 3D geometric priors during pretraining through knowledge distillation. Rather than distilling geometric features solely into the vision encoder, we align the LLM's hidden states corresponding to visual tokens with features from a frozen geometry-aware vision transformer (VGGT), ensuring that geometric understanding is deeply integrated into the multimodal representations that drive action prediction. Pretrained on the Bridge dataset with this geometry distillation mechanism, GLaD achieves a 94.1% average success rate across four LIBERO task suites, outperforming UniVLA (92.5%), which uses identical pretraining data. These results validate that geometry-aware pretraining enhances spatial reasoning and policy generalization without requiring explicit depth sensors or 3D annotations.
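To make the distillation mechanism concrete, below is a minimal PyTorch sketch of the alignment objective the abstract describes: the LLM's hidden states at visual-token positions are projected into the frozen teacher's feature space and pulled toward the VGGT features with a per-token loss. The class and variable names (GeometricLatentDistillation, d_llm, d_teacher, lambda_distill) and the choice of a cosine-similarity loss are illustrative assumptions, not the paper's confirmed implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GeometricLatentDistillation(nn.Module):
    """Align LLM hidden states at visual-token positions with features
    from a frozen geometry teacher (e.g., VGGT).

    The projection + cosine objective is an assumed instantiation for
    illustration; the paper's exact loss may differ.
    """

    def __init__(self, d_llm: int = 4096, d_teacher: int = 1024):
        super().__init__()
        # Project student hidden states into the teacher's feature space.
        self.proj = nn.Linear(d_llm, d_teacher)

    def forward(self, llm_vis_hidden: torch.Tensor,
                teacher_feats: torch.Tensor) -> torch.Tensor:
        # llm_vis_hidden: (B, N_vis, d_llm)    LLM hidden states for visual tokens
        # teacher_feats:  (B, N_vis, d_teacher) frozen VGGT features
        student = self.proj(llm_vis_hidden)
        # Teacher is frozen: block gradients into it explicitly.
        cos = F.cosine_similarity(student, teacher_feats.detach(), dim=-1)
        # Minimizing (1 - cosine) maximizes per-token alignment.
        return (1.0 - cos).mean()


# Example usage with random tensors standing in for real activations:
distill = GeometricLatentDistillation(d_llm=4096, d_teacher=1024)
llm_vis_hidden = torch.randn(2, 256, 4096)   # batch of 2, 256 visual tokens
teacher_feats = torch.randn(2, 256, 1024)
loss_distill = distill(llm_vis_hidden, teacher_feats)
# Combined with the policy objective: total = action_loss + lambda_distill * loss_distill
```

Because the teacher features enter only through a detached target, the geometry supervision shapes the LLM's multimodal representations without requiring depth sensors or 3D annotations at training or inference time.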

Country of Origin
🇦🇪 🇺🇸 United Arab Emirates, United States

Page Count
12 pages

Category
Computer Science:
Robotics