Mechanistic Finetuning of Vision-Language-Action Models via Few-Shot Demonstrations
By: Chancharik Mitra, Yusen Luo, Raj Saravanan, and more
Potential Business Impact:
Teaches robots to do new jobs with few examples.
Vision-Language-Action (VLA) models promise to extend the remarkable success of vision-language models (VLMs) to robotics. Yet, unlike VLMs in the vision-language domain, VLAs for robotics require finetuning to contend with varying physical factors such as robot embodiment, environment characteristics, and the spatial relationships of each task. Existing finetuning methods lack specificity, adapting the same set of parameters regardless of a task's visual, linguistic, and physical characteristics. Inspired by functional specificity in neuroscience, we hypothesize that it is more effective to finetune sparse model representations specific to a given task. In this work, we introduce Robotic Steering, a finetuning approach grounded in mechanistic interpretability that leverages few-shot demonstrations to identify and selectively finetune task-specific attention heads aligned with the physical, visual, and linguistic requirements of robotic tasks. Through comprehensive on-robot evaluations with a Franka Emika robot arm, we demonstrate that Robotic Steering outperforms LoRA while achieving superior robustness under task variation, reduced computational cost, and enhanced interpretability for adapting VLAs to diverse robotic tasks.
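To make the core idea concrete, the sketch below illustrates one way to "identify and selectively finetune task-specific attention heads" from a handful of demonstrations: score each head on the demo data, then freeze the model except for the parameters tied to the top-scoring heads. This is a minimal sketch under assumed conventions, not the paper's actual Robotic Steering procedure; the module layout (`model.transformer.layers`, `layer.self_attn.o_proj`, `num_heads`) and the activation-magnitude scoring rule are illustrative assumptions.

```python
# Illustrative sketch: select a sparse set of attention heads from few-shot demos,
# then unfreeze only those for finetuning. Module names and the scoring heuristic
# are assumptions, not the published Robotic Steering algorithm.
import torch


def rank_heads_on_demos(model, demo_batches):
    """Score each (layer, head) by mean activation magnitude on few-shot demonstrations."""
    scores, hooks = {}, []

    def make_hook(layer_idx, num_heads):
        def hook(module, inputs, output):
            x = inputs[0]                                    # concatenated head outputs, (B, T, H)
            b, t, h = x.shape
            per_head = x.view(b, t, num_heads, h // num_heads)
            head_scores = per_head.abs().mean(dim=(0, 1, 3))  # one scalar per head
            for h_idx, s in enumerate(head_scores.tolist()):
                scores[(layer_idx, h_idx)] = scores.get((layer_idx, h_idx), 0.0) + s
        return hook

    # Hook the input of each attention output projection (assumed attribute names).
    for i, layer in enumerate(model.transformer.layers):
        attn = layer.self_attn
        hooks.append(attn.o_proj.register_forward_hook(make_hook(i, attn.num_heads)))

    with torch.no_grad():
        for batch in demo_batches:
            model(**batch)

    for h in hooks:
        h.remove()
    return sorted(scores, key=scores.get, reverse=True)


def unfreeze_top_heads(model, ranked_heads, k=16):
    """Freeze the whole model, then unfreeze only parameters tied to the top-k scored heads."""
    for p in model.parameters():
        p.requires_grad = False
    selected_layers = {layer_idx for layer_idx, _ in ranked_heads[:k]}
    for i, layer in enumerate(model.transformer.layers):
        if i in selected_layers:
            # Coarse approximation: unfreeze the whole output projection of layers
            # containing selected heads; a per-head gradient mask would be more faithful.
            layer.self_attn.o_proj.weight.requires_grad = True
```

In practice, only the small set of unfrozen parameters would then be optimized on the few-shot demonstrations, which is what keeps the adaptation cheap relative to methods like LoRA that update a fixed, task-agnostic set of parameters.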
Similar Papers
Mechanistic interpretability for steering vision-language-action models
Robotics
Controls robots by understanding words and sights.
10 Open Challenges Steering the Future of Vision-Language-Action Models
Robotics
Robots learn to follow spoken commands and act.
Enhancing Generalization in Vision-Language-Action Models by Preserving Pretrained Representations
Robotics
Robots learn to do new jobs by watching and reading.