SA-VLA: Spatially-Aware Flow-Matching for Vision-Language-Action Reinforcement Learning
By: Xu Pan, Zhenglin Wan, Xingrui Yu, and more
Potential Business Impact:
Robots learn to do tasks better in new places.
Vision-Language-Action (VLA) models exhibit strong generalization in robotic manipulation, yet reinforcement learning (RL) fine-tuning often degrades robustness under spatial distribution shifts. For flow-matching VLA policies, this degradation is closely associated with the erosion of spatial inductive bias during RL adaptation, as sparse rewards and spatially agnostic exploration increasingly favor short-horizon visual cues. To address this issue, we propose SA-VLA, a spatially-aware RL adaptation framework that preserves spatial grounding during policy optimization by aligning representation learning, reward design, and exploration with task geometry. SA-VLA fuses implicit spatial representations with visual tokens, provides dense rewards that reflect geometric progress, and employs SCAN, a spatially-conditioned annealed exploration strategy tailored to flow-matching dynamics. Across challenging multi-object and cluttered manipulation benchmarks, SA-VLA enables stable RL fine-tuning and improves zero-shot spatial generalization, yielding more robust and transferable behaviors. Code and project page are available at https://xupan.top/Projects/savla.
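The abstract gives no formulas, but the ingredients it names, dense rewards that track geometric progress and annealed, spatially conditioned exploration noise injected into flow-matching action sampling, can be illustrated with a small toy sketch. The snippet below is not taken from the paper: the function names (geometric_progress_reward, scan_noise_scale, sample_action_flow), the 7-DoF action dimension, and the specific schedules are all hypothetical, chosen only to make the described mechanisms concrete.

```python
import numpy as np

def geometric_progress_reward(ee_pos, target_pos, prev_dist, success_thresh=0.02):
    """Toy dense reward: positive when the end-effector moves closer to the
    target, plus a bonus on reaching it. Illustrates 'dense rewards that
    reflect geometric progress'; not the paper's exact formulation."""
    dist = float(np.linalg.norm(ee_pos - target_pos))
    progress = prev_dist - dist          # > 0 if this step reduced the distance
    bonus = 1.0 if dist < success_thresh else 0.0
    return progress + bonus, dist

def scan_noise_scale(step, total_steps, dist_to_target,
                     sigma_max=0.3, sigma_min=0.02, ref_dist=0.3):
    """Toy stand-in for a spatially-conditioned, annealed exploration scale:
    the noise magnitude decays over training (annealing) and shrinks as the
    end-effector nears the target (spatial conditioning)."""
    frac = min(step / total_steps, 1.0)
    annealed = sigma_max + (sigma_min - sigma_max) * frac
    spatial = min(dist_to_target / ref_dist, 1.0)
    return annealed * spatial

def sample_action_flow(policy_velocity, obs, noise_scale, num_steps=10, rng=None):
    """Euler integration of a flow-matching policy from Gaussian noise to an
    action, with exploration noise added at each integration step.
    `policy_velocity(obs, a, t)` is an assumed interface returning da/dt."""
    rng = rng or np.random.default_rng()
    a = rng.standard_normal(7)           # hypothetical 7-DoF action, starts as noise
    dt = 1.0 / num_steps
    for i in range(num_steps):
        t = i * dt
        v = policy_velocity(obs, a, t)
        a = a + dt * v + noise_scale * np.sqrt(dt) * rng.standard_normal(a.shape)
    return a
```

Under these assumptions, an RL loop would query scan_noise_scale with the current training step and end-effector-to-target distance, pass the result into sample_action_flow, and score the transition with geometric_progress_reward instead of a sparse success signal.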
Similar Papers
Spatial-Aware VLA Pretraining through Visual-Physical Alignment from Human Videos
Robotics
Helps robots understand 3D space to do tasks.
SpatialVLA: Exploring Spatial Representations for Visual-Language-Action Model
Robotics
Robots learn to do new tasks without practice.
DepthVLA: Enhancing Vision-Language-Action Models with Depth-Aware Spatial Reasoning
CV and Pattern Recognition
Helps robots understand where things are better.