VGGT-DP: Generalizable Robot Control via Vision Foundation Models

Published: September 23, 2025 | arXiv ID: 2509.18778v1

By: Shijia Ge, Yinxin Zhang, Shuzhao Xie, and more

Potential Business Impact:

Robots learn manipulation tasks by watching expert demonstrations.

Business Areas:
Image Recognition, Data and Analytics, Software

Visual imitation learning frameworks allow robots to learn manipulation skills from expert demonstrations. While existing approaches mainly focus on policy design, they often neglect the structure and capacity of visual encoders, limiting spatial understanding and generalization. Inspired by biological vision systems, which rely on both visual and proprioceptive cues for robust control, we propose VGGT-DP, a visuomotor policy framework that integrates geometric priors from a pretrained 3D perception model with proprioceptive feedback. We adopt the Visual Geometry Grounded Transformer (VGGT) as the visual encoder and introduce a proprioception-guided visual learning strategy to align perception with internal robot states, improving spatial grounding and closed-loop control. To reduce inference latency, we design a frame-wise token reuse mechanism that compacts multi-view tokens into an efficient spatial representation. We further apply random token pruning to enhance policy robustness and reduce overfitting. Experiments on challenging MetaWorld tasks show that VGGT-DP significantly outperforms strong baselines such as DP and DP3, particularly in precision-critical and long-horizon scenarios.
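The abstract mentions two efficiency mechanisms, compacting multi-view tokens into a smaller spatial representation and random token pruning, without implementation detail. Below is a minimal PyTorch sketch of how such components could look; all names, shapes, and the pooling-based compaction are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch (not the authors' code) of two ideas from the abstract:
# random token pruning and compacting multi-view visual tokens.
# Shapes, module names, and the pooling strategy are assumptions.
import torch
import torch.nn as nn


def random_token_pruning(tokens: torch.Tensor, keep_ratio: float = 0.75) -> torch.Tensor:
    """Randomly keep a subset of visual tokens during training.

    tokens: (batch, num_tokens, dim) visual tokens from the encoder.
    """
    if not 0.0 < keep_ratio <= 1.0:
        raise ValueError("keep_ratio must be in (0, 1]")
    b, n, d = tokens.shape
    k = max(1, int(n * keep_ratio))
    # Independent random ordering per sample; keep the first k indices.
    idx = torch.rand(b, n, device=tokens.device).argsort(dim=1)[:, :k]
    return tokens.gather(1, idx.unsqueeze(-1).expand(-1, -1, d))


class TokenCompactor(nn.Module):
    """Pool each view's token grid down to a few tokens, then concatenate
    across views -- one plausible reading of compacting multi-view tokens
    into an efficient spatial representation."""

    def __init__(self, dim: int, tokens_per_view: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool1d(tokens_per_view)
        self.proj = nn.Linear(dim, dim)

    def forward(self, view_tokens: torch.Tensor) -> torch.Tensor:
        # view_tokens: (batch, views, tokens, dim)
        b, v, n, d = view_tokens.shape
        x = view_tokens.reshape(b * v, n, d).transpose(1, 2)   # (b*v, d, n)
        x = self.pool(x).transpose(1, 2)                       # (b*v, k, d)
        x = self.proj(x).reshape(b, -1, d)                     # (b, v*k, d)
        return x


if __name__ == "__main__":
    tokens = torch.randn(2, 3, 196, 256)           # batch=2, 3 views, 196 tokens, dim=256
    compact = TokenCompactor(dim=256)(tokens)      # (2, 48, 256)
    pruned = random_token_pruning(compact, 0.75)   # (2, 36, 256)
    print(compact.shape, pruned.shape)
```

In this reading, pruning a random subset of tokens during training acts as a regularizer on the policy head, consistent with the paper's stated goal of reducing overfitting, while the full compacted token set would be kept at inference time.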

Country of Origin
🇨🇳 China

Page Count
9 pages

Category
Computer Science:
Robotics