GeoAware-VLA: Implicit Geometry Aware Vision-Language-Action Model

Published: September 17, 2025 | arXiv ID: 2509.14117v2

By: Ali Abouzeid, Malak Mansour, Zezhou Sun, and more

Potential Business Impact:

Robots see better from new angles.

Business Areas:
Image Recognition, Data and Analytics, Software

Vision-Language-Action (VLA) models often fail to generalize to novel camera viewpoints, a limitation stemming from their difficulty in inferring robust 3D geometry from 2D images. We introduce GeoAware-VLA, a simple yet effective approach that enhances viewpoint invariance by integrating strong geometric priors into the vision backbone. Instead of training a visual encoder or relying on explicit 3D data, we leverage a frozen, pretrained geometric vision model as a feature extractor. A trainable projection layer then adapts these geometrically-rich features for the policy decoder, relieving it of the burden of learning 3D consistency from scratch. Through extensive evaluations on LIBERO benchmark subsets, we show GeoAware-VLA achieves substantial improvements in zero-shot generalization to novel camera poses, boosting success rates by over 2x in simulation. Crucially, these benefits translate to the physical world; our model shows a significant performance gain on a real robot, especially when evaluated from unseen camera angles. Our approach proves effective across both continuous and discrete action spaces, highlighting that robust geometric grounding is a key component for creating more generalizable robotic agents.
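
The abstract outlines the architecture at a high level: a frozen, pretrained geometric vision backbone used as a feature extractor, a trainable projection layer that adapts those features, and a policy decoder that predicts actions. Below is a minimal PyTorch sketch of that wiring; the class names, feature dimensions, and decoder structure are illustrative assumptions for clarity, not the authors' actual implementation.

```python
# Minimal sketch of the frozen-backbone + trainable-projection wiring
# described in the abstract. Module names and dimensions are assumptions.
import torch
import torch.nn as nn

class GeoAwareVLASketch(nn.Module):
    def __init__(self, geometric_encoder: nn.Module, feat_dim: int = 1024,
                 policy_dim: int = 512, action_dim: int = 7):
        super().__init__()
        # Frozen, pretrained geometric vision backbone (feature extractor).
        self.encoder = geometric_encoder
        for p in self.encoder.parameters():
            p.requires_grad = False

        # Trainable projection that adapts the geometry-rich features
        # for the downstream policy decoder.
        self.projection = nn.Linear(feat_dim, policy_dim)

        # Placeholder policy decoder: maps projected visual features
        # (language conditioning omitted here) to a continuous action.
        self.policy_decoder = nn.Sequential(
            nn.Linear(policy_dim, policy_dim),
            nn.GELU(),
            nn.Linear(policy_dim, action_dim),
        )

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        with torch.no_grad():                 # backbone stays frozen
            feats = self.encoder(image)       # (B, feat_dim)
        feats = self.projection(feats)        # trainable adaptation
        return self.policy_decoder(feats)     # predicted action
```

In this sketch only the projection and decoder receive gradients, which mirrors the paper's stated goal of reusing strong geometric priors rather than learning 3D consistency from scratch.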

Country of Origin
🇦🇪 United Arab Emirates

Page Count
8 pages

Category
Computer Science:
Robotics