Score: 1

MoS-VLA: A Vision-Language-Action Model with One-Shot Skill Adaptation

Published: October 18, 2025 | arXiv ID: 2510.16617v1

By: Ruihan Zhao, Tyler Ingebrand, Sandeep Chinchali, and more

Potential Business Impact:

Robots learn new manipulation tasks from a single demonstration.

Business Areas:
Autonomous Vehicles, Transportation

Vision-Language-Action (VLA) models trained on large robot datasets promise general-purpose, robust control across diverse domains and embodiments. However, existing approaches often fail out-of-the-box when deployed in novel environments, embodiments, or tasks. We introduce Mixture of Skills VLA (MoS-VLA), a framework that represents robot manipulation policies as linear combinations of a finite set of learned basis functions. During pretraining, MoS-VLA jointly learns these basis functions across datasets from the Open X-Embodiment project, producing a structured skill space. At test time, adapting to a new task requires only a single expert demonstration. The corresponding skill representation is then inferred via a lightweight convex optimization problem that minimizes the L1 action error, without requiring gradient updates. This gradient-free adaptation incurs minimal overhead while enabling rapid instantiation of new skills. Empirically, MoS-VLA achieves lower action-prediction error on five out of five unseen datasets and succeeds in both simulation and real-robot tasks where a pretrained VLA model fails outright. Project page: mos-vla.github.io/
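The test-time adaptation described in the abstract, inferring a linear combination of learned basis functions by minimizing the L1 action error on a single demonstration without gradient updates, amounts to a small convex program. The sketch below is a hypothetical illustration under assumed details, not the paper's implementation: the function name, array shapes, use of cvxpy, and the choice of unconstrained mixture weights are all assumptions.

```python
# Hypothetical sketch of MoS-VLA-style one-shot skill inference (names and shapes assumed).
# Given K learned basis policies evaluated on the states of one expert demonstration,
# solve a convex program for mixture weights w minimizing the L1 action error.
import numpy as np
import cvxpy as cp

def infer_skill_weights(basis_actions: np.ndarray, demo_actions: np.ndarray) -> np.ndarray:
    """
    basis_actions: (T, K, D) array -- action predicted by each of K basis functions
                   at each of T demonstration timesteps (D-dimensional actions).
    demo_actions:  (T, D) array -- expert actions from the single demonstration.
    Returns K mixture weights minimizing the summed L1 error (no gradient updates).
    """
    T, K, D = basis_actions.shape
    w = cp.Variable(K)
    # The adapted policy's action at each timestep is a linear combination of basis outputs.
    residuals = [basis_actions[t].T @ w - demo_actions[t] for t in range(T)]
    objective = cp.Minimize(sum(cp.norm(r, 1) for r in residuals))
    cp.Problem(objective).solve()
    return w.value

# Toy usage: a 20-step demonstration, 8 basis functions, 7-DoF actions.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    basis = rng.standard_normal((20, 8, 7))
    demo = rng.standard_normal((20, 7))
    print(infer_skill_weights(basis, demo))
```

Because the objective is a sum of L1 terms in the weights, it reduces to a linear program, which is consistent with the abstract's claim that adaptation is lightweight and gradient-free; whether the paper adds constraints on the weights is not stated here.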

Country of Origin
🇺🇸 United States

Page Count
14 pages

Category
Computer Science:
Robotics