Mechanistic Finetuning of Vision-Language-Action Models via Few-Shot Demonstrations

Published: November 27, 2025 | arXiv ID: 2511.22697v1

By: Chancharik Mitra, Yusen Luo, Raj Saravanan, and more

Potential Business Impact:

Teaches robots new tasks from only a few example demonstrations.

Business Areas:
Robotics Hardware, Science and Engineering, Software

Vision-Language-Action (VLA) models promise to extend the remarkable success of vision-language models (VLMs) to robotics. Yet, unlike VLMs in the vision-language domain, VLAs for robotics require finetuning to contend with varying physical factors such as robot embodiment, environment characteristics, and the spatial relationships of each task. Existing finetuning methods lack specificity, adapting the same set of parameters regardless of a task's visual, linguistic, and physical characteristics. Inspired by functional specificity in neuroscience, we hypothesize that it is more effective to finetune sparse model representations specific to a given task. In this work, we introduce Robotic Steering, a finetuning approach grounded in mechanistic interpretability that leverages few-shot demonstrations to identify and selectively finetune task-specific attention heads aligned with the physical, visual, and linguistic requirements of robotic tasks. Through comprehensive on-robot evaluations with a Franka Emika robot arm, we demonstrate that Robotic Steering outperforms LoRA while achieving superior robustness under task variation, reduced computational cost, and enhanced interpretability when adapting VLAs to diverse robotic tasks.
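
As a rough illustration of the selective-finetuning idea described in the abstract (not the paper's actual Robotic Steering implementation), the sketch below scores attention heads by gradient magnitude on a handful of demonstrations and then finetunes only the top-ranked heads while keeping the rest of the model frozen. The toy model, the per-head scoring rule, and all hyperparameters are assumptions made for illustration.

```python
# Minimal sketch: pick task-relevant attention heads from few-shot demos,
# then finetune only those heads. All names and the scoring heuristic are
# hypothetical stand-ins, not the paper's method.
import torch
import torch.nn as nn


class PerHeadAttention(nn.Module):
    """Toy attention-like block with separate parameters per head so that
    individual heads can be frozen or finetuned independently."""
    def __init__(self, dim: int, n_heads: int):
        super().__init__()
        self.heads = nn.ModuleList(nn.Linear(dim, dim) for _ in range(n_heads))
        self.out = nn.Linear(dim * n_heads, dim)

    def forward(self, x):
        return self.out(torch.cat([h(x) for h in self.heads], dim=-1))


class TinyPolicy(nn.Module):
    """Minimal stand-in policy: one attention block plus an action head."""
    def __init__(self, dim=32, n_heads=4, act_dim=7):
        super().__init__()
        self.attn = PerHeadAttention(dim, n_heads)
        self.head = nn.Linear(dim, act_dim)

    def forward(self, x):
        return self.head(self.attn(x))


def score_heads(model, demos, loss_fn):
    """Rank heads by accumulated gradient magnitude on the few-shot demos
    (one plausible proxy for task relevance; an assumption here)."""
    model.zero_grad()
    for obs, action in demos:
        loss_fn(model(obs), action).backward()
    scores = []
    for i, head in enumerate(model.attn.heads):
        g = sum(p.grad.abs().sum().item() for p in head.parameters())
        scores.append((g, i))
    return [i for _, i in sorted(scores, reverse=True)]


def finetune_selected_heads(model, demos, loss_fn, top_k=2, steps=50, lr=1e-4):
    """Freeze all parameters, unfreeze only the top-k scored heads, and
    finetune them on the demonstrations."""
    ranked = score_heads(model, demos, loss_fn)
    for p in model.parameters():
        p.requires_grad_(False)
    for i in ranked[:top_k]:
        for p in model.attn.heads[i].parameters():
            p.requires_grad_(True)
    opt = torch.optim.Adam(
        [p for p in model.parameters() if p.requires_grad], lr=lr)
    for _ in range(steps):
        for obs, action in demos:
            opt.zero_grad()
            loss_fn(model(obs), action).backward()
            opt.step()


if __name__ == "__main__":
    torch.manual_seed(0)
    policy = TinyPolicy()
    # Few-shot "demonstrations": random (observation, action) pairs.
    demos = [(torch.randn(1, 32), torch.randn(1, 7)) for _ in range(5)]
    finetune_selected_heads(policy, demos, nn.MSELoss(), top_k=2)
```

Because only the selected heads' parameters receive updates, the number of trainable parameters shrinks with the number of chosen heads, which is one way a head-selective scheme could reduce computational cost relative to adapting a fixed parameter set.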

Page Count
17 pages

Category
Computer Science:
Robotics