Learning to Act Robustly with View-Invariant Latent Actions
By: Youngjoon Jeong, Junha Chun, Taesup Kim
Potential Business Impact:
Robots recognize the same scene from any camera angle.
Vision-based robotic policies often struggle with even minor viewpoint changes, underscoring the need for view-invariant visual representations. This challenge becomes more pronounced in real-world settings, where viewpoint variability is unavoidable and can significantly disrupt policy performance. Existing methods typically learn invariance from multi-view observations at the scene level, but such approaches rely on visual appearance and fail to incorporate the physical dynamics essential for robust generalization. We propose View-Invariant Latent Action (VILA), which models a latent action capturing transition patterns across trajectories to learn view-invariant representations grounded in physical dynamics. VILA aligns these latent actions across viewpoints using an action-guided objective based on ground-truth action sequences. Experiments in both simulation and the real world show that VILA-based policies generalize effectively to unseen viewpoints and transfer well to new tasks, establishing VILA as a strong pretraining framework that improves robustness and downstream learning performance.
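To make the pretraining idea concrete, here is a minimal sketch of a VILA-style objective: a shared encoder infers a latent action from consecutive frames, latent actions from two synchronized viewpoints are aligned, and an action head grounds them in the ground-truth actions. All module names, dimensions, and the exact loss composition are illustrative assumptions, not the authors' released implementation.

```python
# Hypothetical VILA-style pretraining sketch (PyTorch); not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LatentActionModel(nn.Module):
    def __init__(self, feat_dim=256, latent_dim=32, action_dim=7):
        super().__init__()
        # Shared image encoder applied to every viewpoint (assumed small CNN backbone).
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim),
        )
        # Infers a latent action from a pair of consecutive frame features.
        self.latent_head = nn.Sequential(
            nn.Linear(2 * feat_dim, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, latent_dim),
        )
        # Decodes the latent action into the ground-truth action space.
        self.action_head = nn.Linear(latent_dim, action_dim)

    def latent_action(self, obs_t, obs_t1):
        f_t, f_t1 = self.encoder(obs_t), self.encoder(obs_t1)
        return self.latent_head(torch.cat([f_t, f_t1], dim=-1))

def vila_losses(model, view_a_t, view_a_t1, view_b_t, view_b_t1, gt_action):
    """Alignment + action-guided losses for one transition seen from two views."""
    z_a = model.latent_action(view_a_t, view_a_t1)
    z_b = model.latent_action(view_b_t, view_b_t1)
    # Cross-view alignment: the same physical transition should yield the same
    # latent action regardless of camera viewpoint.
    align_loss = F.mse_loss(z_a, z_b)
    # Action-guided grounding: latent actions must predict the real actions,
    # tying the representation to physical dynamics rather than appearance.
    action_loss = (F.mse_loss(model.action_head(z_a), gt_action)
                   + F.mse_loss(model.action_head(z_b), gt_action))
    return align_loss + action_loss

# Toy usage: random tensors stand in for two synchronized camera views.
model = LatentActionModel()
obs = lambda: torch.randn(4, 3, 64, 64)
loss = vila_losses(model, obs(), obs(), obs(), obs(), torch.randn(4, 7))
loss.backward()
```

In this sketch, a downstream policy would be trained on top of the pretrained encoder; the relative weighting of the alignment and action terms is left unspecified, as the abstract does not state it.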
Similar Papers
villa-X: Enhancing Latent Action Modeling in Vision-Language-Action Models
Robotics
Teaches robots to do new tasks from words.
LatBot: Distilling Universal Latent Actions for Vision-Language-Action Models
Robotics
Teaches robots to do new jobs with little practice.