villa-X: Enhancing Latent Action Modeling in Vision-Language-Action Models
By: Xiaoyu Chen, Hangxing Wei, Pushi Zhang, and more
Potential Business Impact:
Teaches robots to do new tasks from words.
Visual-Language-Action (VLA) models have emerged as a popular paradigm for learning robot manipulation policies that can follow language instructions and generalize to novel scenarios. Recent work has begun to explore the incorporation of latent actions, an abstract representation of the visual change between two frames, into VLA pre-training. In this paper, we introduce villa-X, a novel Visual-Language-Latent-Action (ViLLA) framework that advances latent action modeling for learning generalizable robot manipulation policies. Our approach improves both how latent actions are learned and how they are incorporated into VLA pre-training. Together, these contributions enable villa-X to achieve superior performance across simulated environments, including SIMPLER and LIBERO, as well as on two real-world robot setups covering gripper and dexterous hand manipulation. We believe the ViLLA paradigm holds significant promise, and that our villa-X provides a strong foundation for future research.
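To make the abstract's notion of a latent action concrete, here is a minimal, hypothetical sketch (not the authors' implementation): a small encoder compresses the visual change between two frame features into a latent action, and a policy conditioned on observation and language features predicts such a latent before decoding low-level robot actions. All module names, dimensions, and the PyTorch design below are assumptions for illustration only.

```python
# Illustrative sketch of the latent-action idea from the abstract.
# NOT taken from the villa-X paper or its code; names and sizes are made up.
import torch
import torch.nn as nn


class LatentActionEncoder(nn.Module):
    """Compresses the visual change between two frame features into a latent action."""

    def __init__(self, frame_dim: int = 512, latent_dim: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * frame_dim, 256),
            nn.ReLU(),
            nn.Linear(256, latent_dim),
        )

    def forward(self, frame_t: torch.Tensor, frame_tk: torch.Tensor) -> torch.Tensor:
        # Concatenate features of frame t and frame t+k, then map to a latent action.
        return self.net(torch.cat([frame_t, frame_tk], dim=-1))


class LatentConditionedPolicy(nn.Module):
    """Predicts a latent action from (observation, language), then decodes a robot action."""

    def __init__(self, obs_dim: int = 512, lang_dim: int = 512,
                 latent_dim: int = 32, action_dim: int = 7):
        super().__init__()
        self.latent_head = nn.Sequential(
            nn.Linear(obs_dim + lang_dim, 256), nn.ReLU(), nn.Linear(256, latent_dim)
        )
        self.action_decoder = nn.Sequential(
            nn.Linear(obs_dim + latent_dim, 256), nn.ReLU(), nn.Linear(256, action_dim)
        )

    def forward(self, obs: torch.Tensor, lang: torch.Tensor) -> torch.Tensor:
        # First predict an abstract latent action, then decode it into low-level controls.
        latent = self.latent_head(torch.cat([obs, lang], dim=-1))
        return self.action_decoder(torch.cat([obs, latent], dim=-1))


if __name__ == "__main__":
    # Toy usage with pre-extracted frame/observation/language features.
    enc, policy = LatentActionEncoder(), LatentConditionedPolicy()
    f_t, f_tk = torch.randn(4, 512), torch.randn(4, 512)
    obs, lang = torch.randn(4, 512), torch.randn(4, 512)
    print(enc(f_t, f_tk).shape)    # torch.Size([4, 32])
    print(policy(obs, lang).shape) # torch.Size([4, 7])
```

The encoder stands in for the pre-training stage that learns latent actions from pairs of frames, while the policy stands in for the VLA model that predicts latents before actions; how the two are actually trained and combined is the subject of the paper itself.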
Similar Papers
Latent Action Pretraining Through World Modeling
Robotics
Teaches robots to do tasks from watching videos.
Vision-Language-Action Models for Robotics: A Review Towards Real-World Applications
Robotics
Robots learn new jobs by seeing and hearing.