Stable Language Guidance for Vision-Language-Action Models
By: Zhihao Zhan, Yuhao Chen, Jiaying Zhou, and more
Potential Business Impact:
Robots understand what you mean, not just what you say.
Vision-Language-Action (VLA) models have demonstrated impressive capabilities in generalized robotic control; however, they remain notoriously brittle to linguistic perturbations. We identify a critical "modality collapse" phenomenon where strong visual priors overwhelm sparse linguistic signals, causing agents to overfit to specific instruction phrasings while ignoring the underlying semantic intent. To address this, we propose Residual Semantic Steering (RSS), a probabilistic framework that disentangles physical affordance from semantic execution. RSS introduces two theoretical innovations: (1) Monte Carlo Syntactic Integration, which approximates the true semantic posterior via dense, LLM-driven distributional expansion, and (2) Residual Affordance Steering, a dual-stream decoding mechanism that explicitly isolates the causal influence of language by subtracting the visual affordance prior. Theoretical analysis suggests that RSS effectively maximizes the mutual information between action and intent while suppressing visual distractors. Empirical results across diverse manipulation benchmarks demonstrate that RSS achieves state-of-the-art robustness, maintaining performance even under adversarial linguistic perturbations.
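To make the two mechanisms in the abstract concrete, here is a minimal decode-time sketch, not the authors' implementation: the function names `paraphrase_with_llm` and `policy_logits`, the guidance weight `alpha`, and the specific residual combination are all illustrative assumptions.

```python
# Minimal sketch of the two ideas described in the abstract (assumptions only,
# not the paper's code): Monte Carlo syntactic integration averages the policy
# over LLM-generated rephrasings; residual affordance steering subtracts the
# vision-only prior so the language-driven residual steers the action choice.

import numpy as np


def paraphrase_with_llm(instruction: str, n: int) -> list[str]:
    """Placeholder for LLM-driven distributional expansion: return n
    rephrasings of the instruction. Here it simply echoes the input."""
    return [instruction] * n


def policy_logits(image: np.ndarray, instruction: str | None) -> np.ndarray:
    """Placeholder VLA policy head: action logits given an image and an
    optional instruction. Passing None yields the vision-only affordance
    prior. Random numbers stand in for a real model."""
    seed = abs(hash(instruction)) % (2**32) if instruction else 0
    rng = np.random.default_rng(seed)
    return rng.normal(size=8)  # e.g. 8 discretized action bins


def rss_decode(image: np.ndarray, instruction: str,
               n_samples: int = 8, alpha: float = 1.0) -> int:
    # (1) Monte Carlo syntactic integration: average logits over paraphrases
    #     to approximate the semantic posterior rather than one phrasing.
    phrasings = paraphrase_with_llm(instruction, n_samples)
    semantic = np.mean([policy_logits(image, p) for p in phrasings], axis=0)

    # (2) Residual affordance steering: subtract the vision-only prior and
    #     amplify the language residual (one plausible guidance-style form).
    affordance_prior = policy_logits(image, None)
    steered = semantic + alpha * (semantic - affordance_prior)

    return int(np.argmax(steered))  # index of the selected action bin


if __name__ == "__main__":
    dummy_image = np.zeros((224, 224, 3))
    print(rss_decode(dummy_image, "pick up the red block"))
```

The residual combination mirrors classifier-free-guidance-style decoding; whether RSS uses this exact form or a different weighting is an assumption here.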
Similar Papers
Mechanistic interpretability for steering vision-language-action models
Robotics
Controls robots by understanding words and sights.
Do What You Say: Steering Vision-Language-Action Models via Runtime Reasoning-Action Alignment Verification
Robotics
Helps robots follow instructions better, even when things change.
10 Open Challenges Steering the Future of Vision-Language-Action Models
Robotics
Robots learn to follow spoken commands and act.