RobustVLA: Robustness-Aware Reinforcement Post-Training for Vision-Language-Action Models
By: Hongyin Zhang, Shuo Zhang, Junxi Jin, and more
Potential Business Impact:
Makes robots work better even when things go wrong.
Vision-Language-Action (VLA) models have recently emerged as powerful general-purpose policies for robotic manipulation, benefiting from large-scale multi-modal pre-training. However, they often fail to generalize reliably in out-of-distribution deployments, where unavoidable disturbances such as observation noise, sensor errors, or actuation perturbations become prevalent. While recent Reinforcement Learning (RL)-based post-training provides a practical means to adapt pre-trained VLA models, existing methods mainly emphasize reward maximization and overlook robustness to environmental uncertainty. In this work, we introduce RobustVLA, a lightweight online RL post-training method designed to explicitly enhance the resilience of VLA models. Through a systematic robustness analysis, we identify two key regularizations: Jacobian regularization, which mitigates sensitivity to observation noise, and smoothness regularization, which stabilizes policies under action perturbations. Extensive experiments across diverse robotic environments demonstrate that RobustVLA significantly outperforms prior state-of-the-art methods in robustness and reliability. Our results highlight the importance of principled robustness-aware RL post-training as a key step toward improving the reliability and robustness of VLA models.
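The two regularizations described above can be illustrated with a minimal sketch. The snippet below is not the paper's implementation: the linear `policy`, the finite-difference Jacobian estimate, and the noise-based smoothness term are all illustrative assumptions, standing in for a full VLA model and its training objective.

```python
import numpy as np

# Hypothetical stand-in for a VLA policy: a small linear map from
# observation features to actions (the real model is a large network).
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 8)) * 0.1

def policy(obs):
    # Maps an 8-dim observation to a 4-dim action.
    return W @ obs

def jacobian_penalty(obs, eps=1e-4):
    # Finite-difference estimate of the squared Frobenius norm of
    # d(action)/d(obs), penalizing sensitivity to observation noise.
    base = policy(obs)
    total = 0.0
    for i in range(obs.shape[0]):
        pert = obs.copy()
        pert[i] += eps
        total += np.sum(((policy(pert) - base) / eps) ** 2)
    return total

def smoothness_penalty(obs, sigma=0.01, n=8):
    # Mean squared action deviation under small random observation
    # perturbations, encouraging outputs that change smoothly and
    # remain stable when inputs are perturbed.
    base = policy(obs)
    noise = rng.normal(scale=sigma, size=(n,) + obs.shape)
    return float(np.mean([np.sum((policy(obs + d) - base) ** 2)
                          for d in noise]))

obs = rng.normal(size=8)
reg_loss = jacobian_penalty(obs) + smoothness_penalty(obs)
```

In an RL post-training loop, a term like `reg_loss` (suitably weighted) would be added to the policy objective; the weights and the exact form of each penalty here are assumptions for illustration only.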
Similar Papers
Reinforcing Action Policies by Prophesying
Robotics
Teaches robots to learn new tasks faster.
SimpleVLA-RL: Scaling VLA Training via Reinforcement Learning
Robotics
Robots learn to do new tasks better with less data.
VLA-R1: Enhancing Reasoning in Vision-Language-Action Models
CV and Pattern Recognition
Teaches robots to think and do tasks.