Periodic Asynchrony: An Effective Method for Accelerating On-Policy Reinforcement Learning
By: Jian Lu
Potential Business Impact:
Makes AI training much faster and cheaper.
Since the introduction of the GRPO algorithm, reinforcement learning (RL) has attracted increasing attention, with growing efforts to reproduce and apply it. However, training efficiency remains a critical challenge. In mainstream RL frameworks, inference and training are typically deployed on the same devices. While this co-location reduces costs through resource consolidation, its synchronous execution couples the two workloads computationally and prevents inference and training from running concurrently. In this study, we return to the strategy of deploying inference and training separately and, by improving the data loader, transform the conventional synchronous architecture into a periodically asynchronous one. This allows demand-driven, independent, and elastic scaling of each component while keeping the algorithm exactly equivalent to the synchronous method: both remain strictly on-policy. In addition, we adopt a unified tri-model architecture in the training phase and propose a shared-prompt attention mask to reduce repetitive computation. In practice, these techniques achieve at least a threefold overall speedup for RL training on NPU platforms, indicating their potential for widespread application.
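The abstract names a shared-prompt attention mask but does not spell out its construction. In GRPO-style training a group of responses is sampled from the same prompt, so one plausible realization is to pack the prompt once, followed by all of its responses, and mask attention so that each response attends to the shared prompt and to its own earlier tokens but never to sibling responses. The sketch below assumes this packing layout; the function name `shared_prompt_attention_mask` and the mask shape are illustrative, not taken from the paper.

```python
import torch


def shared_prompt_attention_mask(prompt_len: int, response_lens: list[int]) -> torch.Tensor:
    """Boolean mask (True = attention allowed) for a packed sequence
    [prompt, response_1, ..., response_G]: every response attends to the
    shared prompt and causally to itself, but not to other responses.
    Assumed layout; not the paper's exact implementation.
    """
    total = prompt_len + sum(response_lens)
    mask = torch.zeros(total, total, dtype=torch.bool)

    # Prompt tokens: ordinary causal attention among themselves.
    mask[:prompt_len, :prompt_len] = torch.ones(prompt_len, prompt_len).tril().bool()

    start = prompt_len
    for length in response_lens:
        end = start + length
        # Each response token sees the full prompt ...
        mask[start:end, :prompt_len] = True
        # ... and causally sees its own response only.
        mask[start:end, start:end] = torch.ones(length, length).tril().bool()
        start = end
    return mask


if __name__ == "__main__":
    # One prompt of 4 tokens shared by three responses of lengths 2, 3, 2.
    print(shared_prompt_attention_mask(4, [2, 3, 2]).int())
```

Under this layout, the prompt's keys and values are computed once per group instead of once per response, which is where the claimed reduction in repetitive computation would come from.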
Similar Papers
Synchronous vs Asynchronous Reinforcement Learning in a Real World Robot
Robotics
Robots learn and react much faster.
Part II: ROLL Flash -- Accelerating RLVR and Agentic Training with Asynchrony
Machine Learning (CS)
Makes AI learn faster and use computers better.
A-3PO: Accelerating Asynchronous LLM Training with Staleness-aware Proximal Policy Approximation
Machine Learning (CS)
Makes AI learn faster without extra work.