Continually Evolving Skill Knowledge in Vision Language Action Model

Published: November 22, 2025 | arXiv ID: 2511.18085v1

By: Yuxuan Wu, Guangming Wang, Zhiheng Yang, and more

Potential Business Impact:

Robots learn new jobs without needing lots of new training.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

Developing general robot intelligence in open environments requires continual skill learning. Recent Vision-Language-Action (VLA) models leverage massive pretraining data to support diverse manipulation tasks, but they still depend heavily on task-specific fine-tuning, revealing a lack of continual learning capability. Existing continual learning methods are also resource-intensive to scale to VLA models. We propose Stellar VLA, a knowledge-driven continual learning framework with two variants: T-Stellar, modeling a task-centric knowledge space, and TS-Stellar, capturing hierarchical task-skill structure. Stellar VLA enables self-supervised knowledge evolution through joint learning of the task latent representation and the knowledge space, reducing annotation needs. Knowledge-guided expert routing provides task specialization without extra network parameters, lowering training overhead. Experiments on the LIBERO benchmark and real-world tasks show over 50 percent average improvement in final success rates relative to baselines. TS-Stellar further excels in complex action inference, and in-depth analyses verify effective knowledge retention and discovery. Our code will be released soon.
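To make the routing idea concrete: a minimal sketch of knowledge-guided expert routing, assuming the knowledge space is represented as one prototype vector per expert and routing weights come from similarity between a task latent and those prototypes. The function name, dimensions, and temperature are illustrative assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical knowledge space: one prototype vector per expert.
prototypes = rng.normal(size=(4, 8))                # 4 experts, latent dim 8
prototypes /= np.linalg.norm(prototypes, axis=1, keepdims=True)

def route(task_latent, prototypes, temperature=0.5):
    """Return soft expert weights from cosine similarity to prototypes.

    Routing reuses the learned knowledge space directly, so it adds no
    extra routing-network parameters (the property claimed in the abstract).
    """
    z = task_latent / np.linalg.norm(task_latent)
    sims = prototypes @ z                           # similarity to each prototype
    logits = sims / temperature
    w = np.exp(logits - logits.max())               # stable softmax
    return w / w.sum()

# A task latent near prototype 2 should route most weight to expert 2.
latent = prototypes[2] + 0.05 * rng.normal(size=8)
weights = route(latent, prototypes)
```

Because the router is just a similarity lookup into the knowledge space, adding a newly discovered task or skill amounts to adding a prototype, which fits the continual-learning setting described above.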

Page Count
16 pages

Category
Computer Science:
Robotics