ActionFlow: A Pipelined Action Acceleration for Vision Language Models on Edge
By: Yuntao Dai, Hang Gu, Teng Wang, and more
Vision-Language-Action (VLA) models have emerged as a unified paradigm for robotic perception and control, enabling emergent generalization and long-horizon task execution. However, their deployment in dynamic, real-world environments is severely hindered by high inference latency. While smooth robotic interaction requires control frequencies of 20 to 30 Hz, current VLA models typically operate at only 3-5 Hz on edge devices due to the memory-bound nature of autoregressive decoding. Existing optimizations often require extensive retraining or compromise model accuracy. To bridge this gap, we introduce ActionFlow, a system-level inference framework tailored for resource-constrained edge platforms. At the core of ActionFlow is a Cross-Request Pipelining strategy, a novel scheduler that redefines VLA inference as a macro-pipeline of micro-requests. The strategy intelligently batches memory-bound Decode phases with compute-bound Prefill phases across consecutive time steps to maximize hardware utilization. Furthermore, to support this scheduling, we propose a Cross-Request State Packed Forward operator and a Unified KV Ring Buffer, which fuse fragmented memory operations into efficient dense computations. Experimental results demonstrate that ActionFlow achieves a 2.55x improvement in FPS on the OpenVLA-7B model without retraining, enabling real-time dynamic manipulation on edge hardware. Our work is available at https://anonymous.4open.science/r/ActionFlow-1D47.
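To make the scheduling idea concrete, here is a minimal Python sketch of how cross-request pipelining might co-schedule the memory-bound decode of one control step with the compute-bound prefill of the next, so the accelerator sees one dense batch per tick instead of two underutilized passes. The names (MicroRequest, CrossRequestScheduler), the 7-token action chunk, and the promotion logic are illustrative assumptions, not the authors' implementation; the State Packed Forward operator and the Unified KV Ring Buffer are not modeled here.

```python
from collections import deque
from dataclasses import dataclass


@dataclass
class MicroRequest:
    """One VLA control step: a single prefill phase followed by N decode steps."""
    step_id: int
    decode_left: int = 7   # action tokens still to decode (assumed chunk size)
    prefilled: bool = False


class CrossRequestScheduler:
    """Toy cross-request pipeliner: each tick fuses one decode token of the
    in-flight step with the prefill of the next queued step, if one exists."""

    def __init__(self):
        self.pending = deque()  # steps waiting for their prefill
        self.active = None      # step currently decoding action tokens

    def submit(self, req: MicroRequest):
        self.pending.append(req)

    def tick(self):
        """Build the work list for one fused forward pass."""
        batch = []
        # Memory-bound part: one decode token for the active request.
        if self.active is not None and self.active.decode_left > 0:
            batch.append(("decode", self.active.step_id))
            self.active.decode_left -= 1
        # Compute-bound part: prefill the next queued request in the same pass.
        if self.pending and not self.pending[0].prefilled:
            batch.append(("prefill", self.pending[0].step_id))
            self.pending[0].prefilled = True
        # Once the active step finishes, promote the already-prefilled step,
        # so decoding resumes immediately without a prefill bubble.
        if self.active is None or self.active.decode_left == 0:
            if self.pending and self.pending[0].prefilled:
                self.active = self.pending.popleft()
        return batch  # what the fused kernel would execute this tick


if __name__ == "__main__":
    sched = CrossRequestScheduler()
    for t in range(3):
        sched.submit(MicroRequest(step_id=t))
    for tick in range(12):
        work = sched.tick()
        if work:
            print(f"tick {tick:2d}: {work}")
```

Running the sketch shows the intended overlap: at the tick where step t emits a decode token, step t+1's prefill rides along in the same batch, which is the macro-pipeline of micro-requests the abstract describes.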