A-3PO: Accelerating Asynchronous LLM Training with Staleness-aware Proximal Policy Approximation
By: Xiaocan Li, Shiliang Wu, Zheng Shen
Potential Business Impact:
Makes AI learn faster without extra work.
The decoupled loss has proven to be a successful reinforcement learning (RL) objective for handling high data staleness in the asynchronous RL setting. It improves the learning stability of coupled-loss algorithms (e.g., PPO, GRPO) by introducing a proximal policy that decouples the off-policy correction (importance weighting) from the control of policy updates (trust region). However, the proximal policy requires an extra forward pass through the network at each training step, creating a computational bottleneck for large language models. We observe that, since the proximal policy serves only as a trust-region anchor between the behavior and target policies, it can be approximated by simple interpolation without explicit computation. We call this approach A-3PO (APproximated Proximal Policy Optimization). A-3PO eliminates the extra forward pass, reducing training time by 18% while maintaining comparable performance. Code & off-the-shelf example are available at: https://github.com/inclusionAI/AReaL/blob/main/docs/algorithms/prox_approx.md
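To make the idea concrete, below is a minimal PyTorch-style sketch of a decoupled-loss policy objective in which the proximal policy's log-probabilities are approximated by interpolating between the behavior-policy and target-policy log-probabilities, so no extra forward pass is needed. The function name, the interpolation coefficient `alpha`, and the convex-combination rule in log-space are illustrative assumptions, not the repository's actual implementation.

```python
import torch


def a3po_policy_loss(
    logprobs,          # log-probs under the current (target) policy, shape [T]
    behav_logprobs,    # log-probs recorded by the behavior policy at rollout time, shape [T]
    advantages,        # estimated advantages, shape [T]
    alpha=0.5,         # hypothetical interpolation coefficient (assumption)
    clip_eps=0.2,      # trust-region clipping range, as in PPO
):
    """Decoupled-loss policy objective with an interpolated proximal policy.

    Instead of running an extra forward pass to obtain proximal-policy
    log-probs, approximate them as a convex combination of the behavior and
    target log-probs (assumption; the paper's exact rule may differ).
    """
    # Approximated proximal log-probs: no additional forward pass needed.
    prox_logprobs = alpha * behav_logprobs.detach() + (1 - alpha) * logprobs.detach()

    # Off-policy correction: importance weight of the proximal policy
    # relative to the behavior policy (treated as a constant).
    behav_ratio = torch.exp(prox_logprobs - behav_logprobs).detach()

    # Trust region: clip the target-vs-proximal ratio, as in PPO.
    prox_ratio = torch.exp(logprobs - prox_logprobs)
    unclipped = prox_ratio * advantages
    clipped = torch.clamp(prox_ratio, 1 - clip_eps, 1 + clip_eps) * advantages

    return -(behav_ratio * torch.min(unclipped, clipped)).mean()
```

In this sketch the importance weight (behav_ratio) carries the off-policy correction while the clipped ratio enforces the trust region around the interpolated anchor, mirroring the decoupling described in the abstract.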
Similar Papers
Deep Gaussian Process Proximal Policy Optimization
Machine Learning (CS)
Helps robots learn safely and explore better.
Periodic Asynchrony: An Effective Method for Accelerating On-Policy Reinforcement Learning
Machine Learning (CS)
Makes computer learning much faster and cheaper.
Truncated Proximal Policy Optimization
Artificial Intelligence
Trains smart computer brains to solve problems faster.