Continually Evolving Skill Knowledge in Vision Language Action Model
By: Yuxuan Wu, Guangming Wang, Zhiheng Yang, and more
Potential Business Impact:
Robots learn new jobs without needing lots of new training.
Developing general robot intelligence in open environments requires continual skill learning. Recent Vision-Language-Action (VLA) models leverage massive pretraining data to support diverse manipulation tasks, but they still depend heavily on task-specific fine-tuning, revealing a lack of continual learning capability. Existing continual learning methods are also resource-intensive to scale to VLA models. We propose Stellar VLA, a knowledge-driven continual learning framework with two variants: T-Stellar, which models a task-centric knowledge space, and TS-Stellar, which captures a hierarchical task-skill structure. Stellar VLA enables self-supervised knowledge evolution through joint learning of the task latent representation and the knowledge space, reducing annotation needs. Knowledge-guided expert routing provides task specialization without extra network parameters, lowering training overhead. Experiments on the LIBERO benchmark and real-world tasks show an average improvement of over 50 percentage points in final success rates relative to baselines. TS-Stellar further excels in complex action inference, and in-depth analyses verify effective knowledge retention and discovery. Our code will be released soon.
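The abstract does not spell out how knowledge-guided expert routing works, so the sketch below is only a rough illustration of one plausible reading: a task latent vector is soft-assigned to prototypes in a learned knowledge space, and those assignments gate a fixed pool of shared experts, so specialization comes from routing rather than added parameters. All names and shapes here (KnowledgeGuidedRouter, prototypes, proto_to_expert, the dimensions) are hypothetical, not the authors' implementation.

```python
# Hypothetical sketch (not the released Stellar VLA code): knowledge-guided
# expert routing over a fixed pool of shared experts.
import torch
import torch.nn as nn
import torch.nn.functional as F

class KnowledgeGuidedRouter(nn.Module):
    def __init__(self, feat_dim: int, num_prototypes: int, num_experts: int, hidden_dim: int):
        super().__init__()
        # Knowledge space: one learnable prototype per task/skill cluster (assumed form).
        self.prototypes = nn.Parameter(torch.randn(num_prototypes, feat_dim))
        # Shared expert pool reused across all tasks, so no per-task parameter growth.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(feat_dim, hidden_dim), nn.GELU(),
                          nn.Linear(hidden_dim, feat_dim))
            for _ in range(num_experts)
        )
        # Static map from prototype assignments to expert mixture weights.
        self.proto_to_expert = nn.Linear(num_prototypes, num_experts)

    def forward(self, task_latent: torch.Tensor) -> torch.Tensor:
        # Soft-assign the task latent to prototypes in the knowledge space.
        sims = F.softmax(task_latent @ self.prototypes.t(), dim=-1)               # (B, P)
        # Turn the prototype assignment into routing weights over experts.
        gates = F.softmax(self.proto_to_expert(sims), dim=-1)                     # (B, E)
        # Mix the shared experts' outputs according to the gates.
        expert_outs = torch.stack([e(task_latent) for e in self.experts], dim=1)  # (B, E, D)
        return (gates.unsqueeze(-1) * expert_outs).sum(dim=1)                     # (B, D)

if __name__ == "__main__":
    router = KnowledgeGuidedRouter(feat_dim=256, num_prototypes=16, num_experts=4, hidden_dim=512)
    out = router(torch.randn(8, 256))
    print(out.shape)  # torch.Size([8, 256])
```

Under this reading, continual learning would update the prototypes (the knowledge space) while the expert pool stays fixed, which matches the paper's claim of task specialization without extra network parameters.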
Similar Papers
See Once, Then Act: Vision-Language-Action Model with Task Learning from One-Shot Video Demonstrations
Robotics
Robots learn new tasks from just one video.
10 Open Challenges Steering the Future of Vision-Language-Action Models
Robotics
Robots learn to follow spoken commands and act.