Score: 1

A-3PO: Accelerating Asynchronous LLM Training with Staleness-aware Proximal Policy Approximation

Published: December 6, 2025 | arXiv ID: 2512.06547v1

By: Xiaocan Li, Shiliang Wu, Zheng Shen

Potential Business Impact:

Trains large language models faster (about 18% less training time) without extra compute or loss of quality.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

The decoupled loss has proven to be a successful reinforcement learning (RL) algorithm for handling high data staleness in the asynchronous RL setting. It improves the learning stability of coupled-loss algorithms (e.g., PPO, GRPO) by introducing a proximal policy that decouples the off-policy correction (importance weighting) from the control of policy updates (trust region). However, the proximal policy requires an extra forward pass through the network at each training step, creating a computational bottleneck for large language models. We observe that since the proximal policy only serves as a trust-region anchor between the behavior and target policies, it can be approximated through simple interpolation without explicit computation. We call this approach A-3PO (APproximated Proximal Policy Optimization). A-3PO eliminates the extra forward pass, reducing training time by 18% while maintaining comparable performance. Code and an off-the-shelf example are available at: https://github.com/inclusionAI/AReaL/blob/main/docs/algorithms/prox_approx.md
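
The interpolation idea in the abstract can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the log-space (geometric) interpolation, the coefficient `alpha`, and the function and argument names here are assumptions made for the example.

```python
# Minimal sketch of a decoupled PPO-style loss with an interpolated proximal
# policy (assumed log-space interpolation; the paper's exact scheme may differ).
import torch

def a3po_decoupled_loss(logp_target, logp_behavior, advantages,
                        alpha=0.5, clip_eps=0.2):
    """logp_target:   log-probs of sampled actions under the current policy
                      (requires grad).
       logp_behavior: log-probs recorded by the stale behavior policy at
                      rollout time (no grad).
       advantages:    advantage estimates for the sampled actions."""
    # Approximate the proximal policy by interpolating between behavior and
    # target log-probs -- no extra forward pass. Detached because the proximal
    # policy only serves as a trust-region anchor.
    logp_prox = (1.0 - alpha) * logp_behavior + alpha * logp_target.detach()

    # Off-policy correction: proximal vs. behavior policy (no gradient).
    correction = torch.exp(logp_prox - logp_behavior)

    # Trust-region ratio: target vs. proximal policy (carries the gradient).
    ratio = torch.exp(logp_target - logp_prox)

    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    # Negate for gradient descent on the clipped surrogate objective.
    return -(correction * torch.minimum(unclipped, clipped)).mean()
```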

Repos / Data Links
https://github.com/inclusionAI/AReaL/blob/main/docs/algorithms/prox_approx.md

Page Count
10 pages

Category
Computer Science:
Machine Learning (CS)