Value Vision-Language-Action Planning & Search
By: Ali Salamatian, Ke Ren, and more
Potential Business Impact:
Helps robots complete manipulation tasks more reliably while planning with less computation.
Vision-Language-Action (VLA) models have emerged as powerful generalist policies for robotic manipulation, yet they remain fundamentally limited by their reliance on behavior cloning, leading to brittleness under distribution shift. While augmenting pretrained models with test-time search algorithms like Monte Carlo Tree Search (MCTS) can mitigate these failures, existing formulations rely solely on the VLA prior for guidance, lacking a grounded estimate of expected future return. Consequently, when the prior is inaccurate, the planner can only correct action selection via the exploration term, which requires extensive simulation to become effective. To address this limitation, we introduce Value Vision-Language-Action Planning and Search (V-VLAPS), a framework that augments MCTS with a lightweight, learnable value function. By training a simple multilayer perceptron (MLP) on the latent representations of a fixed VLA backbone (Octo), we provide the search with an explicit success signal that biases action selection toward high-value regions. We evaluate V-VLAPS on the LIBERO robotic manipulation suite, demonstrating that our value-guided search improves success rates by over 5 percentage points while reducing the average number of MCTS simulations by 5-15 percent compared to baselines that rely only on the VLA prior.
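As a rough illustration of the mechanism described in the abstract, the sketch below pairs a small MLP value head over frozen VLA latents with a PUCT-style selection score that blends the learned value estimate with the prior-weighted exploration term. This is a minimal sketch, not the paper's implementation: the latent dimension, hidden width, `c_puct`, and the training setup are assumed placeholders.

```python
import torch
import torch.nn as nn


class ValueHead(nn.Module):
    """Lightweight MLP mapping frozen VLA latents to a success-probability estimate."""

    def __init__(self, latent_dim: int = 512, hidden_dim: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),
            nn.Sigmoid(),  # predicted probability of task success
        )

    def forward(self, latent: torch.Tensor) -> torch.Tensor:
        return self.net(latent).squeeze(-1)


def puct_score(prior: float, value: float, visit_count: int,
               parent_visits: int, c_puct: float = 1.0) -> float:
    """PUCT-style selection score combining a grounded value estimate with
    prior-weighted exploration.

    `value` is the mean backed-up return for this child, initialized from the
    value head rather than zero, so the search is biased toward high-value
    actions before many simulations have accumulated.
    """
    exploration = c_puct * prior * (parent_visits ** 0.5) / (1 + visit_count)
    return value + exploration


# Training sketch (assumed setup): latents come from the frozen VLA encoder,
# labels are binary task-success outcomes from rollouts.
value_head = ValueHead(latent_dim=512)
loss_fn = nn.BCELoss()
latents = torch.randn(32, 512)              # placeholder batch of VLA latents
success = torch.randint(0, 2, (32,)).float()  # placeholder success labels
loss = loss_fn(value_head(latents), success)
loss.backward()
```

Because only the MLP head is trained while the VLA backbone stays fixed, the added value signal is cheap to learn and to query inside the search loop, which is consistent with the reported reduction in simulations needed per decision.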
Similar Papers
Improving Pre-Trained Vision-Language-Action Policies with Model-Based Search
Robotics
Robots learn to do tasks better by planning ahead.
Pure Vision Language Action (VLA) Models: A Comprehensive Survey
Robotics
Robots learn to see, talk, and do tasks.
Experiences from Benchmarking Vision-Language-Action Models for Robotic Manipulation
Robotics
Robots learn to do tasks better by watching and listening.