Score: 1

GEPO: Group Expectation Policy Optimization for Stable Heterogeneous Reinforcement Learning

Published: August 25, 2025 | arXiv ID: 2508.17850v5

By: Han Zhang, Ruibin Zheng, Zexuan Yi, and more

Potential Business Impact:

Enables training of large AI models across far-apart computing sites connected over the Internet.

Business Areas:
A/B Testing, Data and Analytics

As single-center computing approaches its power limits, decentralized training is becoming essential. However, traditional Reinforcement Learning (RL) methods, which are crucial for post-training large models, cannot adapt to decentralized distributed training because parameter learning is tightly coupled with rollout sampling. To address this, we propose HeteroRL, a heterogeneous RL architecture that decouples the two processes, enabling stable training across geographically distributed nodes connected via the Internet. Its core component is Group Expectation Policy Optimization (GEPO), an asynchronous RL algorithm robust to the latency caused by network delays or heterogeneous computational resources. Our study reveals that high latency significantly increases KL divergence, leading to higher variance in importance-sampling weights and to training instability. GEPO mitigates this by using group expectation weighting to exponentially reduce the variance of the importance weights, with theoretical guarantees. Experiments show that GEPO achieves superior stability, with only a 3% performance drop from the online setting to an 1800-second latency setting, demonstrating strong potential for decentralized RL in geographically distributed, resource-heterogeneous computing environments.
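
The mechanism the abstract describes, replacing high-variance per-sample importance ratios with a group-expectation-weighted estimate, can be illustrated with a small sketch. The code below is a hypothetical, simplified reading of that idea: the function names, the latency-drift simulation, and the exact weighting formula are assumptions made for illustration only, and the paper defines the actual GEPO estimator. It contrasts naive importance-sampling weights computed against stale rollout log-probabilities with weights whose denominator is a group expectation, which tends to damp the extreme per-sample ratios that appear when the learner and behavior policies have drifted apart.

```python
import numpy as np


def naive_is_weights(logp_new, logp_old):
    """Standard per-sample importance weights pi_new(y_i) / pi_old(y_i).

    When rollouts arrive with high latency, the behavior policy is stale,
    the KL gap to the current policy grows, and these ratios become
    heavy-tailed and high-variance.
    """
    return np.exp(logp_new - logp_old)


def group_expectation_weights(logp_new, logp_old):
    """Illustrative group-expectation weighting (hypothetical form).

    Here the per-sample denominator pi_old(y_i) is replaced by the group
    average of pi_old over the G sampled responses, so a single extreme
    stale denominator no longer blows up an individual weight.
    This is a sketch of the general idea, not the paper's exact estimator.
    """
    group_mean_old = np.exp(logp_old).mean()  # stand-in for E_group[pi_old]
    return np.exp(logp_new) / group_mean_old


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Simulate a group of G responses whose behavior-policy log-probs have
    # drifted from the learner policy (stale rollouts under high latency).
    G = 64
    logp_new = rng.normal(-10.0, 1.0, size=G)
    drift = 2.0  # stand-in for a latency-induced KL gap
    logp_old = logp_new - rng.normal(drift, 1.0, size=G)

    w_naive = naive_is_weights(logp_new, logp_old)
    w_group = group_expectation_weights(logp_new, logp_old)
    print("variance of naive IS weights      :", w_naive.var())
    print("variance of group-expectation wts :", w_group.var())
```

In this toy setup the group-expectation weights show a noticeably smaller variance than the naive ratios, which is the qualitative effect the abstract attributes to GEPO; the paper's formulation and theoretical guarantees should be consulted for the precise estimator.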

Country of Origin
🇨🇳 China

Repos / Data Links

Page Count
27 pages

Category
Computer Science:
Machine Learning (CS)