VLAgents: A Policy Server for Efficient VLA Inference
By: Tobias Jülg, Khaled Gamal, Nisarga Nilavadi, and more
Potential Business Impact:
Cuts inference and communication latency when deploying vision-language-action models, so robots can act on commands faster.
The rapid emergence of Vision-Language-Action models (VLAs) has had a significant impact on robotics. However, their deployment remains complex due to fragmented interfaces and the inherent communication latency of distributed setups. To address this, we introduce VLAgents, a modular policy server that abstracts VLA inference behind a unified Gymnasium-style protocol. Crucially, its communication layer transparently adapts to the context, supporting both zero-copy shared memory for high-speed simulation and compressed streaming for remote hardware. In this work, we present the architecture of VLAgents and validate it by integrating seven policies, including OpenVLA and Pi Zero. In a benchmark covering both local and remote communication, we further demonstrate that it outperforms the default policy servers provided by OpenVLA, OpenPi, and LeRobot. VLAgents is available at https://github.com/RobotControlStack/vlagents
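To illustrate what a "Gymnasium-style protocol" for policy inference might look like, here is a minimal sketch. All class and method names (DummyVLAPolicy, PolicyServer, reset, step) are hypothetical stand-ins, not the actual VLAgents API; the real system would wrap models such as OpenVLA or Pi Zero and route calls over shared memory or a compressed network stream.

```python
class DummyVLAPolicy:
    """Hypothetical stand-in for a VLA model: maps an observation to an action.

    A real server would wrap an actual policy (e.g. OpenVLA) here.
    """

    def __init__(self, action_dim: int = 7):
        self.action_dim = action_dim

    def predict(self, observation: dict) -> list[float]:
        # A real policy would run VLA inference on the image and
        # language instruction; we return a zero action vector.
        return [0.0] * self.action_dim


class PolicyServer:
    """Sketch of a Gymnasium-style interface: the client calls reset()
    once per episode to set the task, then step(observation) to obtain
    each action, mirroring how Gymnasium environments expose reset/step."""

    def __init__(self, policy):
        self.policy = policy
        self.task = None

    def reset(self, task_instruction: str) -> None:
        # Start a new episode with a natural-language task description.
        self.task = task_instruction

    def step(self, observation: dict) -> list[float]:
        # In a distributed setup, the observation would arrive via
        # zero-copy shared memory (local) or compressed streaming (remote).
        return self.policy.predict(observation)


server = PolicyServer(DummyVLAPolicy())
server.reset("pick up the red block")
obs = {"image": [[0] * 224] * 224, "instruction": "pick up the red block"}
action = server.step(obs)
```

The appeal of this shape is that any policy exposing predict() can be served behind the same reset/step contract, so clients need not care which VLA backend or transport is in use.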
Similar Papers
Towards Deploying VLA without Fine-Tuning: Plug-and-Play Inference-Time VLA Policy Steering via Embodied Evolutionary Diffusion
Robotics
Robots follow instructions better without retraining.
Improving Pre-Trained Vision-Language-Action Policies with Model-Based Search
Robotics
Robots learn to do tasks better by planning ahead.
HyperVLA: Efficient Inference in Vision-Language-Action Models via Hypernetworks
Robotics
Makes robots learn faster and cheaper.