VLA^2: Empowering Vision-Language-Action Models with an Agentic Framework for Unseen Concept Manipulation
By: Han Zhao, Jiaxuan Zhang, Wenxuan Song, and more
Potential Business Impact:
Helps robots learn to grab objects they have never seen before.
Current vision-language-action (VLA) models, pre-trained on large-scale robotic data, exhibit strong multi-task capabilities and generalize well to variations in visual and language instructions for manipulation. However, their success rate drops significantly when faced with object concepts outside the training data, such as object descriptions and textures unseen in the dataset. To address this, we propose a novel agentic framework, VLA^2, which uses OpenVLA as the execution backbone and leverages external modules such as web retrieval and object detection to provide visual and textual knowledge about target objects to the VLA. This approach mitigates generalization failure when handling out-of-distribution objects. Building on the LIBERO simulation environment, we introduce novel objects and object descriptions to construct a new evaluation benchmark with three difficulty levels to test the effectiveness of our method. Our framework outperforms the current state-of-the-art models on our designed hard-level generalization benchmark. Compared to the standalone OpenVLA baseline, VLA^2 achieves a 44.2% improvement in success rate on the hard-level benchmark and an average improvement of 20.2% across all customized environments, without any performance degradation on in-domain tasks. Project website: https://vla-2.github.io.
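The abstract describes an agentic loop: when an instruction refers to an object concept the backbone VLA has not seen, external modules (web retrieval and object detection) supply grounding information before the policy is queried. The sketch below illustrates that flow only at a high level; the class and method names (VLA2Agent, retrieve-and-detect interfaces, OpenVLA policy wrapper) are hypothetical placeholders, not the paper's actual API.

```python
# Minimal sketch of the agentic framework outlined in the abstract.
# All interfaces (policy, detector, retriever) are assumed for illustration.

from dataclasses import dataclass


@dataclass
class GroundedInstruction:
    text: str            # instruction rewritten with an in-domain object name
    target_bbox: tuple   # (x1, y1, x2, y2) of the target object in the image


class VLA2Agent:
    def __init__(self, policy, detector, retriever, known_vocab):
        self.policy = policy            # execution backbone, e.g. an OpenVLA checkpoint
        self.detector = detector        # open-vocabulary object detector
        self.retriever = retriever      # web retrieval for unseen object concepts
        self.known_vocab = known_vocab  # object names covered by the VLA's training data

    def ground(self, image, instruction, target_name):
        """Attach visual/textual knowledge for out-of-distribution objects."""
        if target_name not in self.known_vocab:
            # Fetch reference images and descriptions for the unseen concept.
            references = self.retriever.search(target_name)
            bbox = self.detector.locate(image, target_name, references)
            # Rewrite the instruction with a concept the backbone understands.
            alias = self.retriever.closest_known_alias(target_name, self.known_vocab)
            instruction = instruction.replace(target_name, alias)
        else:
            bbox = self.detector.locate(image, target_name)
        return GroundedInstruction(instruction, bbox)

    def act(self, image, instruction, target_name):
        grounded = self.ground(image, instruction, target_name)
        # The backbone VLA predicts the low-level action from the grounded input.
        return self.policy.predict(image, grounded.text, hint_bbox=grounded.target_bbox)
```

The design choice reflected here is that the VLA itself is left untouched; generalization to unseen concepts comes from rewriting and augmenting its inputs, which is consistent with the abstract's claim of no performance degradation on in-domain tasks.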
Similar Papers
EvoVLA: Self-Evolving Vision-Language-Action Model
CV and Pattern Recognition
Robots learn to do long, tricky jobs better.
Vision-Language-Action Models for Robotics: A Review Towards Real-World Applications
Robotics
Robots learn new jobs by seeing and hearing.
iFlyBot-VLA Technical Report
CV and Pattern Recognition
Robots learn to do tasks by watching and listening.