villa-X: Enhancing Latent Action Modeling in Vision-Language-Action Models

Published: July 31, 2025 | arXiv ID: 2507.23682v1

By: Xiaoyu Chen, Hangxing Wei, Pushi Zhang, and more

Potential Business Impact:

Enables robots to learn new manipulation tasks from natural-language instructions and generalize to scenarios not seen during training.

Business Areas:
Autonomous Vehicles, Transportation

Visual-Language-Action (VLA) models have emerged as a popular paradigm for learning robot manipulation policies that can follow language instructions and generalize to novel scenarios. Recent work has begun to explore the incorporation of latent actions, an abstract representation of visual change between two frames, into VLA pre-training. In this paper, we introduce villa-X, a novel Visual-Language-Latent-Action (ViLLA) framework that advances latent action modeling for learning generalizable robot manipulation policies. Our approach improves both how latent actions are learned and how they are incorporated into VLA pre-training. Together, these contributions enable villa-X to achieve superior performance across simulated environments including SIMPLER and LIBERO, as well as on two real-world robot setups including gripper and dexterous hand manipulation. We believe the ViLLA paradigm holds significant promise, and that our villa-X provides a strong foundation for future research.
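The abstract's central object, a latent action encoding the visual change between two consecutive frames, is typically learned with an inverse-dynamics encoder paired with a forward model. The PyTorch sketch below illustrates that generic pattern under stated assumptions: the network shapes, the `LatentActionModel` class, and the VQ-style discrete bottleneck are hypothetical choices for illustration, not the actual villa-X architecture.

```python
# Illustrative only: a generic latent-action model, not villa-X's design.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LatentActionModel(nn.Module):
    def __init__(self, num_codes=32, latent_dim=64):
        super().__init__()
        # Inverse dynamics: frames t and t+1 (stacked on channels) -> latent.
        self.idm = nn.Sequential(
            nn.Conv2d(6, 32, 4, 2, 1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, 2, 1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, latent_dim),
        )
        # Discrete codebook: latent actions become a small token vocabulary.
        self.codebook = nn.Embedding(num_codes, latent_dim)
        # Context features of frame t, fused with the latent action below.
        self.ctx = nn.Sequential(
            nn.Conv2d(3, 32, 4, 2, 1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, 2, 1), nn.ReLU(),
        )
        # Forward dynamics: (frame-t features + latent) -> predicted frame t+1.
        self.fdm = nn.Sequential(
            nn.ConvTranspose2d(64 + latent_dim, 32, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, 2, 1),
        )

    def quantize(self, z):
        # Nearest codebook entry with a straight-through gradient estimator
        # (VQ commitment/codebook losses omitted for brevity).
        idx = torch.cdist(z, self.codebook.weight).argmin(dim=-1)
        z_q = self.codebook(idx)
        return z + (z_q - z).detach(), idx

    def forward(self, frame_t, frame_t1):
        z_q, idx = self.quantize(self.idm(torch.cat([frame_t, frame_t1], 1)))
        feat = self.ctx(frame_t)                          # (B, 64, H/4, W/4)
        lat = z_q[:, :, None, None].expand(-1, -1, *feat.shape[-2:])
        return self.fdm(torch.cat([feat, lat], 1)), idx   # predicted frame t+1

# The reconstruction loss forces the latent to carry exactly the information
# needed to transform frame t into frame t+1.
model = LatentActionModel()
f_t, f_t1 = torch.rand(2, 3, 64, 64), torch.rand(2, 3, 64, 64)
pred, codes = model(f_t, f_t1)
F.mse_loss(pred, f_t1).backward()
```

In a ViLLA-style pipeline, the discrete codes emitted by such a model would then serve as abstract action targets during VLA pre-training, which is the integration step the paper's contributions refine.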

Repos / Data Links

Page Count
23 pages

Category
Computer Science: Robotics