Visual-Geometry Diffusion Policy: Robust Generalization via Complementarity-Aware Multimodal Fusion

Published: November 27, 2025 | arXiv ID: 2511.22445v1

By: Yikai Tang, Haoran Geng, Sheng Zang, and more

Potential Business Impact:

Teaches robots to learn new tasks by watching.

Business Areas:
Image Recognition, Data and Analytics, Software

Imitation learning has emerged as a crucial approach for acquiring visuomotor skills from demonstrations, where designing effective observation encoders is essential for policy generalization. However, existing methods often struggle to generalize under spatial and visual randomizations, instead tending to overfit. To address this challenge, we propose Visual-Geometry Diffusion Policy (VGDP), a multimodal imitation learning framework built around a Complementarity-Aware Fusion Module in which modality-wise dropout enforces balanced use of RGB and point-cloud cues. Our experiments show that the expressiveness of the fused latent space is largely induced by the enforced complementarity from modality-wise dropout, with cross-attention serving primarily as a lightweight interaction mechanism rather than the main source of robustness. Across a benchmark of 18 simulated tasks and 4 real-world tasks, VGDP outperforms seven baseline policies with an average performance improvement of 39.1%. More importantly, VGDP demonstrates strong robustness under visual and spatial perturbations, surpassing baselines by an average of 41.5% across different visual conditions and 15.2% across different spatial settings.
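The abstract describes the fusion module only at a high level; the sketch below is one plausible PyTorch reading of it, not the authors' released code. The class name, dropout probability, mean pooling, and the rule that at most one modality is dropped per sample are all assumptions for illustration.

```python
# Hypothetical sketch of a complementarity-aware fusion layer: modality-wise
# dropout zeroes an entire modality's tokens during training (forcing the
# policy to work from either cue alone), and cross-attention provides a
# lightweight interaction between the two streams.
import torch
import torch.nn as nn

class ComplementarityAwareFusion(nn.Module):
    def __init__(self, dim: int = 256, num_heads: int = 4, p_drop: float = 0.25):
        super().__init__()
        self.p_drop = p_drop  # per-modality drop probability (assumed value)
        self.rgb_to_pcd = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.pcd_to_rgb = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.out = nn.Linear(2 * dim, dim)

    def forward(self, rgb: torch.Tensor, pcd: torch.Tensor) -> torch.Tensor:
        # rgb: (B, N_rgb, dim) image tokens; pcd: (B, N_pcd, dim) point tokens
        if self.training:
            B = rgb.shape[0]
            drop_rgb = torch.rand(B, 1, 1, device=rgb.device) < self.p_drop
            drop_pcd = torch.rand(B, 1, 1, device=pcd.device) < self.p_drop
            # Assumed safeguard: never drop both modalities for one sample.
            both = drop_rgb & drop_pcd
            drop_pcd = drop_pcd & ~both
            rgb = rgb * (~drop_rgb).float()
            pcd = pcd * (~drop_pcd).float()
        # Lightweight cross-attention interaction between the two streams.
        rgb_ctx, _ = self.rgb_to_pcd(query=rgb, key=pcd, value=pcd)
        pcd_ctx, _ = self.pcd_to_rgb(query=pcd, key=rgb, value=rgb)
        fused = torch.cat([rgb_ctx.mean(dim=1), pcd_ctx.mean(dim=1)], dim=-1)
        return self.out(fused)  # (B, dim) conditioning vector for the policy
```

Under this reading, robustness comes mainly from the dropout masks, which force each modality's encoder to carry the task on its own; the cross-attention layers only mix the surviving streams, matching the paper's claim that they are an interaction mechanism rather than the source of robustness.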

Page Count
10 pages

Category
Computer Science:
Robotics