Score: 1

villa-X: Enhancing Latent Action Modeling in Vision-Language-Action Models

Published: July 31, 2025 | arXiv ID: 2507.23682v3

By: Xiaoyu Chen, Hangxing Wei, Pushi Zhang, and more

Potential Business Impact:

Robots learn to perform new manipulation tasks from natural-language instructions.

Business Areas:
Autonomous Vehicles, Transportation

Vision-Language-Action (VLA) models have emerged as a popular paradigm for learning robot manipulation policies that can follow language instructions and generalize to novel scenarios. Recent works have begun to explore the incorporation of latent actions, abstract representations of the motion between two frames, into VLA pre-training. In this paper, we introduce villa-X, a novel Vision-Language-Latent-Action (ViLLA) framework that advances latent action modeling for learning generalizable robot manipulation policies. Our approach improves both how latent actions are learned and how they are incorporated into VLA pre-training. We demonstrate that villa-X can generate latent action plans in a zero-shot fashion, even for unseen embodiments, and that it exhibits open-vocabulary symbolic understanding. This capability enables villa-X to achieve superior performance across diverse simulation tasks in SIMPLER and on two real-world robotic setups involving both gripper and dexterous hand manipulation. These results establish villa-X as a principled and scalable paradigm for learning generalizable robot manipulation policies. We believe it provides a strong foundation for future research.
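The abstract does not spell out how a latent action is learned. The sketch below illustrates the general recipe that latent-action modeling in this area builds on: an inverse-dynamics encoder compresses the change between two frames into a discrete code, and a forward decoder must reconstruct the next frame from that code, forcing it to capture the motion. The VQ-VAE-style quantization, layer sizes, and loss weights here are illustrative assumptions for a minimal sketch, not villa-X's actual architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LatentActionModel(nn.Module):
    """Toy latent action model (illustrative, not villa-X's design).
    Encoder: (o_t, o_{t+1}) -> continuous latent z, then vector
    quantization. Decoder: (o_t, z) -> predicted o_{t+1}, so z must
    carry the inter-frame motion."""

    def __init__(self, frame_dim=512, latent_dim=64, codebook_size=256):
        super().__init__()
        # Inverse dynamics: infer the "action" linking two frames.
        self.encoder = nn.Sequential(
            nn.Linear(2 * frame_dim, 256), nn.ReLU(),
            nn.Linear(256, latent_dim),
        )
        # Discrete codebook of latent actions (VQ-VAE style).
        self.codebook = nn.Embedding(codebook_size, latent_dim)
        # Forward dynamics: predict the next frame from (frame, action).
        self.decoder = nn.Sequential(
            nn.Linear(frame_dim + latent_dim, 256), nn.ReLU(),
            nn.Linear(256, frame_dim),
        )

    def quantize(self, z):
        # Nearest codebook entry, with a straight-through gradient.
        dists = torch.cdist(z, self.codebook.weight)  # (B, K)
        idx = dists.argmin(dim=-1)                    # (B,)
        z_q = self.codebook(idx)                      # (B, latent_dim)
        z_st = z + (z_q - z).detach()                 # straight-through
        return z_st, z_q, idx

    def forward(self, o_t, o_t1):
        z = self.encoder(torch.cat([o_t, o_t1], dim=-1))
        z_st, z_q, idx = self.quantize(z)
        o_t1_pred = self.decoder(torch.cat([o_t, z_st], dim=-1))
        return o_t1_pred, z, z_q, idx


# One training step on random "frame features" (stand-ins for real
# image embeddings). Reconstruction plus VQ losses, as in VQ-VAE.
model = LatentActionModel()
o_t, o_t1 = torch.randn(8, 512), torch.randn(8, 512)
o_t1_pred, z, z_q, idx = model(o_t, o_t1)
loss = (F.mse_loss(o_t1_pred, o_t1)            # forward-model reconstruction
        + F.mse_loss(z_q, z.detach())          # pull codebook toward encoder
        + 0.25 * F.mse_loss(z, z_q.detach()))  # commitment term
loss.backward()
print(f"loss={loss.item():.4f}, latent action ids={idx.tolist()}")
```

In a ViLLA-style pipeline, the resulting discrete latent-action tokens would then serve as intermediate prediction targets during VLA pre-training; the specifics of that integration, which the abstract highlights as one of villa-X's contributions, are detailed in the paper itself.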

Repos / Data Links

Page Count
28 pages

Category
Computer Science:
Robotics