AsyncVLA: Asynchronous Flow Matching for Vision-Language-Action Models
By: Yuhua Jiang, Shuang Cheng, Yan Ding, and more
Potential Business Impact:
Robots learn to fix their own mistakes.
Vision-language-action (VLA) models have recently emerged as a powerful paradigm for building generalist robots. However, traditional VLA models that generate actions through flow matching (FM) typically rely on a rigid, uniform time schedule, i.e., synchronous FM (SFM). Without action context awareness and asynchronous self-correction, SFM becomes unstable in long-horizon tasks, where a single action error can cascade into failure. In this work, we propose asynchronous flow matching VLA (AsyncVLA), a novel framework that introduces temporal flexibility through asynchronous FM (AFM) and enables self-correction in action generation. AsyncVLA departs from vanilla SFM in VLA models by generating action tokens on a non-uniform time schedule with action context awareness. In addition, our method introduces a confidence rater to estimate the confidence of the initially generated actions, enabling the model to selectively refine inaccurate action tokens before execution. Moreover, we propose a unified training procedure for SFM and AFM that endows a single model with both modes and improves KV-cache utilization. Extensive experiments on robotic manipulation benchmarks demonstrate that AsyncVLA is data-efficient and exhibits self-correction ability. AsyncVLA achieves state-of-the-art results across general embodied evaluations thanks to its asynchronous generation in AFM. Our code is available at https://github.com/YuhuaJiang2002/AsyncVLA.
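To make the generate-then-refine idea in the abstract concrete, below is a minimal sketch of the two stages it describes: (1) an initial action chunk produced with a uniform (synchronous) flow-matching schedule, and (2) an asynchronous correction pass that re-noises only low-confidence action tokens and integrates them back while conditioning on the confident tokens as action context. All names here (`policy`, `confidence_rater`, `num_steps`, `threshold`) are placeholders for illustration, not the authors' actual API or training procedure; consult the linked repository for the real implementation.

```python
import torch


def synchronous_generate(policy, obs, num_tokens, action_dim, num_steps=10):
    """Synchronous FM: every action token shares the same time t."""
    x = torch.randn(num_tokens, action_dim)          # start all tokens from noise at t = 0
    dt = 1.0 / num_steps
    for k in range(num_steps):
        t = torch.full((num_tokens,), k * dt)        # uniform time schedule across tokens
        v = policy(obs, x, t)                        # predicted velocity field
        x = x + dt * v                               # Euler step toward t = 1
    return x


def asynchronous_refine(policy, confidence_rater, obs, actions,
                        threshold=0.5, num_steps=5):
    """Asynchronous FM sketch: re-noise only low-confidence tokens and
    integrate them back to t = 1, keeping confident tokens fixed as context."""
    conf = confidence_rater(obs, actions)            # per-token confidence in [0, 1]
    low = conf < threshold                           # tokens selected for self-correction
    x = actions.clone()
    t = torch.ones(actions.shape[0])                 # confident tokens stay at t = 1
    x[low] = torch.randn_like(x[low])                # restart selected tokens from noise
    t[low] = 0.0                                     # ...each with its own time value
    dt = 1.0 / num_steps
    for _ in range(num_steps):
        v = policy(obs, x, t)                        # velocity under a non-uniform schedule
        x[low] = x[low] + dt * v[low]                # only low-confidence tokens are updated
        t[low] = torch.clamp(t[low] + dt, max=1.0)
    return x
```

In this sketch the per-token time vector `t` is what makes the schedule asynchronous: already-confident tokens sit at `t = 1` and serve purely as context, while the tokens flagged by the confidence rater are regenerated before execution.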
Similar Papers
VLASH: Real-Time VLAs via Future-State-Aware Asynchronous Inference
Robotics
Makes robots react instantly to what they see.
FPC-VLA: A Vision-Language-Action Framework with a Supervisor for Failure Prediction and Correction
Robotics
Robots learn to fix their own mistakes.
ACG: Action Coherence Guidance for Flow-based VLA models
Robotics
Makes robots move smoother and more accurately.