Towards Deploying VLA without Fine-Tuning: Plug-and-Play Inference-Time VLA Policy Steering via Embodied Evolutionary Diffusion
By: Zhuo Li, Junjia Liu, Zhipeng Dong, and more
Potential Business Impact:
Robots follow instructions better without retraining.
Vision-Language-Action (VLA) models have demonstrated significant potential in real-world robotic manipulation. However, pre-trained VLA policies still suffer from substantial performance degradation during downstream deployment. Although fine-tuning can mitigate this issue, its reliance on costly demonstration collection and intensive computation makes it impractical in real-world settings. In this work, we introduce VLA-Pilot, a plug-and-play inference-time policy steering method for zero-shot deployment of pre-trained VLA policies without any additional fine-tuning or data collection. We evaluate VLA-Pilot on six real-world downstream manipulation tasks across two distinct robotic embodiments, encompassing both in-distribution and out-of-distribution scenarios. Experimental results demonstrate that VLA-Pilot substantially boosts the success rates of off-the-shelf pre-trained VLA policies, enabling robust zero-shot generalization to diverse tasks and embodiments. Experimental videos and code are available at: https://rip4kobe.github.io/vla-pilot/.
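To make the idea of inference-time policy steering concrete, below is a minimal sketch of how candidate actions from a frozen, pre-trained policy could be re-ranked and refined at deployment time without touching the model's weights. This is an illustrative assumption-based example, not the paper's actual algorithm: the interface (`sample_actions`, `score_candidates`) and the simple elite-selection loop are hypothetical stand-ins for the evolutionary diffusion steering described in the work.

```python
# Minimal sketch of inference-time policy steering, assuming a generic
# diffusion-based VLA policy interface. The function names and the simple
# evolutionary loop are illustrative assumptions, not the paper's API.
import numpy as np

def steer_policy(sample_actions, score_candidates, obs, instruction,
                 num_candidates=16, num_generations=3, elite_frac=0.25,
                 noise_scale=0.05, rng=None):
    """Select a steered action sequence without updating policy weights.

    sample_actions(obs, instruction, n) -> (n, horizon, action_dim) array
        draws candidate action sequences from the frozen pre-trained policy.
    score_candidates(obs, instruction, candidates) -> (n,) array
        assigns a task-alignment score to each candidate, e.g. from a
        heuristic or learned objective evaluated at inference time.
    """
    rng = rng or np.random.default_rng()
    # Generation 0: draw candidates from the frozen policy itself.
    population = sample_actions(obs, instruction, num_candidates)
    for _ in range(num_generations):
        scores = score_candidates(obs, instruction, population)
        # Keep the top-scoring "elite" candidates.
        n_elite = max(1, int(elite_frac * len(population)))
        elites = population[np.argsort(scores)[-n_elite:]]
        # Refill the population by perturbing elites (a simple evolutionary step).
        parents = elites[rng.integers(len(elites), size=num_candidates - n_elite)]
        offspring = parents + noise_scale * rng.standard_normal(parents.shape)
        population = np.concatenate([elites, offspring], axis=0)
    final_scores = score_candidates(obs, instruction, population)
    return population[int(np.argmax(final_scores))]
```

Because the policy weights stay frozen and only the sampled actions are searched over, a scheme like this is plug-and-play: it can wrap any off-the-shelf pre-trained VLA policy at deployment without demonstration collection or fine-tuning.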
Similar Papers
On-the-Fly VLA Adaptation via Test-Time Reinforcement Learning
Robotics
Robots learn to do new tasks by themselves.
EVOLVE-VLA: Test-Time Training from Environment Feedback for Vision-Language-Action Models
Robotics
Robots learn new skills by practicing, not just copying.