HetRL: Efficient Reinforcement Learning for LLMs in Heterogeneous Environments
By: Yongjun He, Shuai Zhang, Jiading Gai, et al.
As large language models (LLMs) continue to scale and new GPUs are released ever more frequently, there is increasing demand for LLM post-training in heterogeneous environments, both to leverage underutilized mid-range or previous-generation GPUs across regions and to alleviate the shortage of homogeneous high-end GPUs within a single region. However, achieving high-performance reinforcement learning (RL) training for LLMs on such computing resources remains challenging because the workflow involves multiple models and tasks with complex computation and data dependencies. In this paper, we present HetRL, a distributed system for efficient RL training on infrastructures with heterogeneous GPUs and networks. HetRL formulates the scheduling of RL training in heterogeneous environments as a constrained joint optimization problem and introduces a novel scheduling algorithm that (1) decomposes the complex search space with a multi-level search framework and (2) allocates the search budget via successive halving. Our extensive evaluation, consuming 20,000 GPU-hours, shows that HetRL delivers up to 9.17x the throughput of state-of-the-art systems (3.17x on average) under various workloads and settings.
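To make the budget-allocation idea concrete, below is a minimal sketch of successive halving applied to candidate scheduling plans. It is not HetRL's actual implementation; the names (`successive_halving`, `evaluate_plan`, the notion of integer budget units) are hypothetical, and the scoring function stands in for whatever throughput estimate the scheduler uses.

```python
# Hypothetical sketch of successive-halving budget allocation over candidate
# scheduling plans. All names and the budget model are illustrative, not HetRL's API.
import math
from typing import Any, Callable, List


def successive_halving(
    candidates: List[Any],
    evaluate_plan: Callable[[Any, int], float],  # (plan, budget) -> estimated throughput
    total_budget: int,
) -> Any:
    """Allocate a fixed evaluation budget across candidate scheduling plans.

    Each round spends an equal share of the total budget: every surviving
    candidate is evaluated with a small per-candidate budget, then only the
    better-scoring half survives to the next (better-funded) round.
    """
    rounds = max(1, math.ceil(math.log2(len(candidates))))
    survivors = list(candidates)
    for _ in range(rounds):
        per_candidate = max(1, total_budget // (rounds * len(survivors)))
        scored = [(evaluate_plan(c, per_candidate), c) for c in survivors]
        scored.sort(key=lambda pair: pair[0], reverse=True)  # higher throughput is better
        survivors = [c for _, c in scored[: max(1, len(scored) // 2)]]
        if len(survivors) == 1:
            break
    return survivors[0]
```

The design intuition is that cheap, noisy evaluations are enough to discard clearly poor plans early, so most of the budget is concentrated on the few promising candidates that survive to later rounds.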