CLARE: Continual Learning for Vision-Language-Action Models via Autonomous Adapter Routing and Expansion
By: Ralf Römer, Yi Zhang, Angela P. Schoellig
To teach robots complex manipulation tasks, it is now common practice to fine-tune a pre-trained vision-language-action (VLA) model on task-specific data. However, since this recipe updates existing representations, it is unsuitable for long-term operation in the real world, where robots must continually adapt to new tasks and environments while retaining the knowledge they have already acquired. Existing continual learning methods for robotics commonly require storing previous data (exemplars), struggle with long task sequences, or rely on task identifiers during deployment. To address these limitations, we propose CLARE, a general, parameter-efficient framework for exemplar-free continual learning with VLAs. CLARE introduces lightweight modular adapters into selected feedforward layers and autonomously expands the model only where necessary when learning a new task, guided by layer-wise feature similarity. During deployment, an autoencoder-based routing mechanism dynamically activates the most relevant adapters without requiring task labels. Through extensive experiments on the LIBERO benchmark, we show that CLARE achieves high performance on new tasks without catastrophic forgetting of earlier tasks, significantly outperforming even exemplar-based methods. Code and data are available at https://tum-lsy.github.io/clare.
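The abstract names two mechanisms: similarity-guided adapter expansion during learning and label-free, autoencoder-based routing at deployment. The PyTorch sketch below is not the authors' implementation; it illustrates one plausible reading of those ideas, and all class names, bottleneck sizes, and the similarity threshold are illustrative assumptions.

import torch
import torch.nn as nn


class Adapter(nn.Module):
    # Lightweight bottleneck adapter inserted after a selected feedforward layer.
    def __init__(self, dim: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual adapter: frozen backbone feature plus a small learned correction.
        return x + self.up(torch.relu(self.down(x)))


class AdapterRouter(nn.Module):
    # One small autoencoder per learned task; at deployment, the adapter whose
    # autoencoder reconstructs the current feature with the lowest error is
    # activated, so no task label is required.
    def __init__(self, dim: int, code: int = 32):
        super().__init__()
        self.dim, self.code = dim, code
        self.autoencoders = nn.ModuleList()

    def add_task(self) -> None:
        self.autoencoders.append(
            nn.Sequential(nn.Linear(self.dim, self.code), nn.ReLU(),
                          nn.Linear(self.code, self.dim))
        )

    @torch.no_grad()
    def route(self, feat: torch.Tensor) -> int:
        # Pick the task index with the lowest reconstruction error.
        errors = torch.stack(
            [((ae(feat) - feat) ** 2).mean() for ae in self.autoencoders]
        )
        return int(errors.argmin())


def should_expand(prev_feat: torch.Tensor, new_feat: torch.Tensor,
                  threshold: float = 0.8) -> bool:
    # Hypothetical layer-wise rule: add a new adapter to this layer only if its
    # features on the new task are sufficiently dissimilar from earlier ones.
    sim = torch.cosine_similarity(prev_feat.flatten(), new_feat.flatten(), dim=0)
    return sim.item() < threshold

Under these assumptions, a new task would first trigger should_expand per layer to decide where adapters are added, the new adapters and a matching autoencoder would be trained on that task's features, and route would then select the relevant adapters at test time from the observed features alone.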
Similar Papers
Continually Evolving Skill Knowledge in Vision Language Action Model
Robotics
Robots learn new skills without constant retraining.
ExpReS-VLA: Specializing Vision-Language-Action Models Through Experience Replay and Retrieval
Robotics
Makes robots learn new jobs faster and better.