Enhancing Generalization in Vision-Language-Action Models by Preserving Pretrained Representations
By: Shresth Grover, Akshay Gopalkrishnan, Bo Ai, and more
Potential Business Impact:
Robots learn to do new jobs by watching and reading.
Vision-language-action (VLA) models fine-tuned from vision-language models (VLMs) hold the promise of leveraging rich pretrained representations to build generalist robots across diverse tasks and environments. However, direct fine-tuning on robot data often disrupts these representations and limits generalization. We present a framework that better preserves pretrained features while adapting them for robot manipulation. Our approach introduces three components: (i) a dual-encoder design with one frozen vision encoder to retain pretrained features and another trainable encoder for task adaptation, (ii) a string-based action tokenizer that casts continuous actions into character sequences aligned with the model's pretraining domain, and (iii) a co-training strategy that combines robot demonstrations with vision-language datasets emphasizing spatial reasoning and affordances. Evaluations in simulation and on real robots show that our method improves robustness to visual perturbations, generalization to novel instructions and environments, and overall task success compared to baselines.
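To make the first two components more concrete, here are two minimal sketches. They are illustrative assumptions, not the paper's implementation: the fusion scheme, decimal precision, delimiter, and the toy backbone are all placeholders chosen for clarity.

The first sketch shows the dual-encoder idea: one frozen copy of the pretrained vision encoder keeps its representations intact, while a second trainable copy adapts to robot data. Fusing the two streams by concatenation is an assumption.

```python
# Sketch of a dual vision encoder (frozen + trainable), assuming concatenation fusion.
import torch
import torch.nn as nn


class DualVisionEncoder(nn.Module):
    def __init__(self, make_backbone):
        super().__init__()
        self.frozen = make_backbone()     # preserves pretrained features
        self.trainable = make_backbone()  # adapts to the manipulation task
        for p in self.frozen.parameters():
            p.requires_grad = False       # never updated during fine-tuning

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        with torch.no_grad():
            frozen_feats = self.frozen(images)
        adapted_feats = self.trainable(images)
        # Assumed fusion: concatenate both feature streams for the downstream policy.
        return torch.cat([frozen_feats, adapted_feats], dim=-1)


if __name__ == "__main__":
    # Placeholder backbone standing in for a pretrained ViT/CLIP-style encoder.
    make_backbone = lambda: nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 64))
    encoder = DualVisionEncoder(make_backbone)
    feats = encoder(torch.randn(2, 3, 32, 32))
    print(feats.shape)  # torch.Size([2, 128])
```

The second sketch illustrates a string-based action tokenizer: a continuous action vector is written out as plain text so the VLM's existing subword tokenizer can handle it without new action tokens. The 7-DoF layout, three-decimal precision, and space delimiter are assumptions for illustration.

```python
# Sketch of a string-based action encoding, assuming 7-DoF actions and fixed precision.
from typing import List, Sequence

PRECISION = 3    # assumed decimal places per action dimension
DELIMITER = " "  # assumed separator between dimensions


def actions_to_string(action: Sequence[float]) -> str:
    """Cast a continuous action vector into an ordinary character sequence."""
    return DELIMITER.join(f"{a:.{PRECISION}f}" for a in action)


def string_to_actions(text: str) -> List[float]:
    """Invert the string encoding back to a continuous action vector."""
    return [float(tok) for tok in text.strip().split(DELIMITER)]


if __name__ == "__main__":
    action = [0.012, -0.340, 0.105, 0.0, 0.0, 1.571, 1.0]  # xyz, rpy, gripper
    encoded = actions_to_string(action)
    print(encoded)  # "0.012 -0.340 0.105 0.000 0.000 1.571 1.000"
    assert string_to_actions(encoded) == [round(a, PRECISION) for a in action]
```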
Similar Papers
From Intention to Execution: Probing the Generalization Boundaries of Vision-Language-Action Models
Robotics
Robots learn to do more tasks with better instructions.
Don't Blind Your VLA: Aligning Visual Representations for OOD Generalization
Machine Learning (CS)
Keeps robots smart when learning new tasks.