On-the-Fly VLA Adaptation via Test-Time Reinforcement Learning
By: Changyu Liu, Yiyang Liu, Taowen Wang, and more
Potential Business Impact:
Robots learn to do new tasks by themselves.
Vision-Language-Action (VLA) models have recently emerged as a powerful paradigm for general-purpose robot learning, enabling agents to map visual observations and natural-language instructions into executable robotic actions. Despite their popularity, these models are primarily trained via supervised fine-tuning or training-time reinforcement learning, which requires explicit fine-tuning phases, human intervention, or controlled data collection. Consequently, existing methods remain unsuitable for challenging simulated- or physical-world deployments, where robots must respond autonomously and flexibly to evolving environments. To address this limitation, we introduce Test-Time Reinforcement Learning for VLAs (TT-VLA), a framework that enables on-the-fly policy adaptation during inference. TT-VLA formulates a dense reward mechanism that leverages step-by-step task-progress signals to refine action policies at test time while preserving the SFT/RL-trained priors, making it an effective supplement to current VLA models. Empirical results show that our approach enhances overall adaptability, stability, and task success in dynamic, previously unseen scenarios in both simulated and real-world settings. We believe TT-VLA offers a principled step toward self-improving, deployment-ready VLAs.
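To make the idea concrete, below is a minimal Python sketch of a test-time policy-gradient loop driven by a dense task-progress reward, with a KL penalty toward a frozen prior policy. It is an illustrative assumption about how such a mechanism could look, not the paper's actual implementation: the TinyPolicy class, the progress_reward function, the env.reset/env.step interface returning a progress signal, and the kl_coef weight are all hypothetical placeholders.

import torch
import torch.nn as nn

class TinyPolicy(nn.Module):
    # Stand-in for a VLA action head: maps an observation feature to a Gaussian action distribution.
    def __init__(self, obs_dim: int, act_dim: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, 64), nn.Tanh(), nn.Linear(64, act_dim))
        self.log_std = nn.Parameter(torch.zeros(act_dim))

    def dist(self, obs: torch.Tensor) -> torch.distributions.Normal:
        return torch.distributions.Normal(self.net(obs), self.log_std.exp())

def progress_reward(prev_progress: float, new_progress: float) -> float:
    # Dense reward: improvement in a step-by-step task-progress estimate (hypothetical signal).
    return new_progress - prev_progress

def test_time_adapt(policy, frozen_policy, env, steps=50, lr=1e-4, kl_coef=0.1):
    # One episode of on-the-fly adaptation; frozen_policy anchors the SFT/RL-trained prior.
    opt = torch.optim.Adam(policy.parameters(), lr=lr)
    obs, progress = env.reset()  # assumed to return (observation, initial progress)
    for _ in range(steps):
        obs_t = torch.as_tensor(obs, dtype=torch.float32)
        dist = policy.dist(obs_t)
        action = dist.sample()
        obs, new_progress, done = env.step(action.numpy())  # assumed env interface
        reward = progress_reward(progress, new_progress)
        progress = new_progress

        # REINFORCE-style update on the dense progress reward, regularized
        # toward the frozen prior to avoid drifting away from pretrained behavior.
        log_prob = dist.log_prob(action).sum()
        with torch.no_grad():
            prior_dist = frozen_policy.dist(obs_t)
        kl = torch.distributions.kl_divergence(dist, prior_dist).sum()
        loss = -(reward * log_prob) + kl_coef * kl

        opt.zero_grad()
        loss.backward()
        opt.step()
        if done:
            break

The key design choice this sketch illustrates is that adaptation happens during deployment, one step at a time, using only the progress signal as supervision, while the KL term keeps the adapted policy close to the pretrained prior.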
Similar Papers
EVOLVE-VLA: Test-Time Training from Environment Feedback for Vision-Language-Action Models
Robotics
Robots learn new skills by practicing, not just copying.
Steering Vision-Language-Action Models as Anti-Exploration: A Test-Time Scaling Approach
Robotics
Makes robots learn and do tasks better.
Reflection-Based Task Adaptation for Self-Improving VLA
Robotics
Robots learn new tasks faster by fixing mistakes.