GEPO: Group Expectation Policy Optimization for Stable Heterogeneous Reinforcement Learning
By: Han Zhang, Ruibin Zheng, Zexuan Yi, and more
Potential Business Impact:
Trains smart computer programs on computers that are far apart.
As single-center computing approaches its power constraints, decentralized training becomes essential. However, traditional Reinforcement Learning (RL) methods, which are crucial for large-model post-training, cannot adapt to decentralized distributed training because parameter learning and rollout sampling are tightly coupled. To this end, we propose HeteroRL, a heterogeneous RL architecture that decouples these processes, enabling stable training across geographically distributed nodes connected via the Internet. Its core component is Group Expectation Policy Optimization (GEPO), an asynchronous RL algorithm robust to latency caused by network delays or heterogeneous computational resources. Our study reveals that high latency significantly increases KL divergence, leading to higher variance in importance sampling weights and training instability. GEPO mitigates this by using group expectation weighting to exponentially reduce the variance of importance weights, with theoretical guarantees. Experiments show that GEPO achieves superior stability, with only a 3% performance drop from the online setting to 1800-second latency, demonstrating strong potential for decentralized RL in geographically distributed, resource-heterogeneous computing environments.
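The paper itself defines the exact GEPO objective; as a rough, hypothetical illustration of the idea the abstract describes (replacing raw per-sample importance ratios with weights normalized by a group-level expectation to tame variance under stale rollouts), here is a minimal sketch. The function name, group layout, and normalization choice are assumptions for illustration only, not the authors' implementation.

```python
import torch

def group_expectation_weights(logp_new: torch.Tensor,
                              logp_old: torch.Tensor,
                              group_size: int) -> torch.Tensor:
    """Hypothetical sketch of group-expectation-normalized importance weights.

    logp_new, logp_old: log-probabilities of the sampled actions under the
    current (learner) and behavior (rollout) policies,
    shape (num_groups * group_size,).
    """
    ratios = torch.exp(logp_new - logp_old)          # per-sample importance ratios
    grouped = ratios.view(-1, group_size)            # (num_groups, group_size)
    group_mean = grouped.mean(dim=1, keepdim=True)   # expectation within each group
    # Normalizing by the group expectation keeps weights centered near 1,
    # which shrinks their variance when latency widens the KL gap
    # between the rollout policy and the current policy.
    weights = grouped / (group_mean + 1e-8)
    return weights.view(-1)


if __name__ == "__main__":
    torch.manual_seed(0)
    logp_old = torch.randn(32)
    logp_new = logp_old + 0.5 * torch.randn(32)      # simulate a stale-policy gap
    w = group_expectation_weights(logp_new, logp_old, group_size=8)
    raw = torch.exp(logp_new - logp_old)
    print("normalized-weight variance:", w.var().item())
    print("raw-ratio variance:", raw.var().item())
```

In this toy example the group-normalized weights have noticeably lower variance than the raw ratios, which is the stabilizing effect the abstract attributes to GEPO under high-latency, asynchronous rollouts.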
Similar Papers
Group Expectation Policy Optimization for Stable Heterogeneous Reinforcement Learning in LLMs
Machine Learning (CS)
Makes AI learn better even with slow internet.
Graph-Enhanced Policy Optimization in LLM Agent Training
Artificial Intelligence
Teaches AI to learn better by seeing connections.
HAEPO: History-Aggregated Exploratory Policy Optimization
Machine Learning (CS)
Helps AI learn better by remembering past steps.