GLaD: Geometric Latent Distillation for Vision-Language-Action Models
By: Minghao Guo, Meng Cao, Jiachen Tao, and more
Potential Business Impact:
Helps robots understand and move objects better.
Most existing Vision-Language-Action (VLA) models rely primarily on RGB information, while ignoring geometric cues crucial for spatial reasoning and manipulation. In this work, we introduce GLaD, a geometry-aware VLA framework that incorporates 3D geometric priors during pretraining through knowledge distillation. Rather than distilling geometric features solely into the vision encoder, we align the LLM's hidden states corresponding to visual tokens with features from a frozen geometry-aware vision transformer (VGGT), ensuring that geometric understanding is deeply integrated into the multimodal representations that drive action prediction. Pretrained on the Bridge dataset with this geometry distillation mechanism, GLaD achieves 94.1% average success rate across four LIBERO task suites, outperforming UniVLA (92.5%) which uses identical pretraining data. These results validate that geometry-aware pretraining enhances spatial reasoning and policy generalization without requiring explicit depth sensors or 3D annotations.
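The core mechanism described above, aligning the LLM's hidden states at visual-token positions with features from a frozen geometry-aware teacher, can be sketched as a distillation loss. The sketch below is a minimal, hypothetical PyTorch illustration assuming a learned linear projection from the LLM's hidden dimension into the teacher's feature space and a cosine-similarity alignment objective; the class and argument names are illustrative, not taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GeometricDistillationLoss(nn.Module):
    """Hypothetical sketch: align LLM hidden states at visual-token
    positions with features from a frozen geometry teacher (e.g. VGGT)."""

    def __init__(self, llm_dim: int, teacher_dim: int):
        super().__init__()
        # Learned projection from the student (LLM) hidden space
        # into the frozen teacher's feature space.
        self.proj = nn.Linear(llm_dim, teacher_dim)

    def forward(
        self,
        hidden_states: torch.Tensor,   # (B, T, llm_dim) LLM hidden states
        visual_mask: torch.Tensor,     # (B, T) bool, True at visual tokens
        teacher_feats: torch.Tensor,   # (N, teacher_dim) frozen teacher features
    ) -> torch.Tensor:
        # Gather only the hidden states that correspond to visual tokens.
        student = self.proj(hidden_states[visual_mask])  # (N, teacher_dim)
        # Negative cosine similarity: minimizing this pulls the student's
        # visual-token representations toward the teacher's geometric features.
        return 1.0 - F.cosine_similarity(student, teacher_feats, dim=-1).mean()
```

In practice the teacher is kept frozen (no gradients flow into it), and this loss would be added to the usual action-prediction objective during pretraining; the weighting between the two terms is a design choice not specified here.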
Similar Papers
GeoAware-VLA: Implicit Geometry Aware Vision-Language-Action Model
Robotics
Robots see better from new angles.
GeoVLA: Empowering 3D Representations in Vision-Language-Action Models
Robotics
Robots understand 3D space to do tasks better.