RollMux: Phase-Level Multiplexing for Disaggregated RL Post-Training
By: Tianyuan Wu, Lunxi Cao, Yining Wei, and more
Rollout-training disaggregation is emerging as the standard architecture for Reinforcement Learning (RL) post-training: memory-bound rollout and compute-bound training are placed on separate, purpose-built clusters to maximize hardware efficiency. However, the strict synchronization required by on-policy algorithms introduces severe dependency bubbles, forcing one cluster to idle while the dependent phase runs on the other. We present RollMux, a cluster scheduling framework that reclaims these bubbles through cross-cluster orchestration. RollMux is built on the insight that the structural idleness of one job can be utilized by the active phase of another. To realize this, we introduce the co-execution group abstraction, which partitions the cluster into isolated locality domains. This abstraction enables a two-tier scheduling architecture: an inter-group scheduler that optimizes job placement using conservative stochastic planning, and an intra-group scheduler that orchestrates a provably optimal round-robin schedule. The group abstraction also imposes a residency constraint, ensuring that massive model states remain cached in host memory to enable "warm-start" context switching. We evaluate RollMux on a production-scale testbed with 328 H20 and 328 H800 GPUs. RollMux improves cost efficiency by 1.84x over standard disaggregation and 1.38x over state-of-the-art co-located baselines, while achieving 100% SLO attainment.
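To make the intra-group scheduling idea concrete, below is a minimal Python sketch of the kind of round-robin phase interleaving the abstract describes: two on-policy jobs, each alternating a rollout phase (on the rollout cluster) and a training phase (on the training cluster), offset by one phase so each cluster always hosts an active phase in steady state. All names, the equal-length-phase assumption, and the two-job setup are illustrative assumptions, not the RollMux implementation, which also handles unequal phases, group membership, and warm-start context switches.

```python
# Toy model of intra-group round-robin phase multiplexing.
# Assumption: two on-policy jobs with equal-length rollout and training
# phases; offsetting job i by i phase slots means the two clusters never
# contend and neither idles once the pipeline fills.

ROLLOUT, TRAIN = "rollout", "training"

def round_robin_schedule(jobs, steps):
    """Return (timeline, slots): job i is offset by i phase slots."""
    slots = 2 * steps + len(jobs) - 1      # extra slots drain the offset jobs
    timeline = []
    for t in range(slots):
        for i, job in enumerate(jobs):
            k = t - i                      # job i's local phase index
            if 0 <= k < 2 * steps:         # job still has phases left
                phase = ROLLOUT if k % 2 == 0 else TRAIN
                timeline.append((t, phase, job))
    return timeline, slots

if __name__ == "__main__":
    timeline, slots = round_robin_schedule(["jobA", "jobB"], steps=4)
    for t, cluster, job in timeline:
        print(f"slot {t}: {cluster:8s} <- {job}")
    # Two clusters x `slots` time slots is the available capacity;
    # every timeline entry is one busy cluster-slot.
    busy = len(timeline) / (slots * 2)
    print(f"cluster utilization: {busy:.0%}")  # ~89% vs. 50% for a solo job
```

In this toy model, a single disaggregated job keeps each cluster busy only half the time (its rollout and training phases strictly alternate), while the one-slot offset lets the second job's training fill the first job's rollout bubble and vice versa, which is the dependency-bubble reclamation the abstract claims.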
Similar Papers
Part II: ROLL Flash -- Accelerating RLVR and Agentic Training with Asynchrony
Machine Learning (CS)
RollPacker: Mitigating Long-Tail Rollouts for Fast, Synchronous RL Post-Training
Distributed, Parallel, and Cluster Computing
AReaL-Hex: Accommodating Asynchronous RL Training over Heterogeneous GPUs
Distributed, Parallel, and Cluster Computing