RollArc: Scaling Agentic RL Training via Disaggregated Infrastructure
By: Wei Gao, Yuheng Zhao, Tianyuan Wu, and more
Agentic Reinforcement Learning (RL) enables Large Language Models (LLMs) to perform autonomous decision-making and long-term planning. Unlike standard LLM post-training, agentic RL workloads are highly heterogeneous, combining compute-intensive prefill phases, bandwidth-bound decoding, and stateful, CPU-heavy environment simulations. We argue that efficient agentic RL training requires disaggregated infrastructure to leverage specialized, best-fit hardware. However, naive disaggregation introduces substantial synchronization overhead and resource underutilization due to the complex dependencies between stages. We present RollArc, a distributed system designed to maximize throughput for multi-task agentic RL on disaggregated infrastructure. RollArc is built on three core principles: (1) hardware-affinity workload mapping, which routes compute-bound and bandwidth-bound tasks to best-fit GPU devices; (2) fine-grained asynchrony, which manages execution at the trajectory level to mitigate resource bubbles; and (3) statefulness-aware computation, which offloads stateless components (e.g., reward models) to serverless infrastructure for elastic scaling. Our results demonstrate that RollArc effectively improves training throughput and achieves a 1.35-2.05× end-to-end training time reduction compared to monolithic and synchronous baselines. We also evaluate RollArc by training a hundreds-of-billions-parameter MoE model for the Qoder product on an Alibaba cluster with more than 3,000 GPUs, further demonstrating RollArc's scalability and robustness. The code is available at https://github.com/alibaba/ROLL.
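To make the second principle concrete, here is a minimal, hypothetical sketch (not RollArc's actual implementation) of trajectory-level asynchrony: each trajectory flows independently through pipeline stages connected by queues, so a slow stage delays only the trajectories inside it rather than stalling a whole synchronous batch. The stage functions standing in for prefill, decode, and environment/reward are toy placeholders.

```python
# Hypothetical sketch of fine-grained, trajectory-level asynchrony.
# Stages are wired together with queues; trajectories stream through
# independently instead of moving in lock-step batches.
import queue
import threading

def make_stage(name, work_fn, in_q, out_q):
    """Consume trajectories from in_q, apply work_fn, push results to out_q."""
    def run():
        while True:
            traj = in_q.get()
            if traj is None:          # sentinel: propagate shutdown downstream
                out_q.put(None)
                return
            out_q.put(work_fn(traj))
    return threading.Thread(target=run, name=name, daemon=True)

def pipeline(trajectories, stage_fns):
    """Connect stages with queues and stream all trajectories through them."""
    queues = [queue.Queue() for _ in range(len(stage_fns) + 1)]
    threads = [make_stage(f"stage{i}", fn, queues[i], queues[i + 1])
               for i, fn in enumerate(stage_fns)]
    for t in threads:
        t.start()
    for traj in trajectories:
        queues[0].put(traj)
    queues[0].put(None)               # signal end of input
    results = []
    while True:
        item = queues[-1].get()
        if item is None:
            break
        results.append(item)
    return results

# Toy stages standing in for prefill, decode, and environment/reward.
result = pipeline(
    [{"id": i, "tokens": 0} for i in range(4)],
    [lambda t: {**t, "tokens": t["tokens"] + 8},    # "prefill"
     lambda t: {**t, "tokens": t["tokens"] + 1},    # "decode"
     lambda t: {**t, "reward": t["tokens"] * 0.1}], # "env/reward"
)
```

In a real disaggregated deployment, each stage would run in its own process pool on best-fit hardware (compute-rich GPUs for prefill, bandwidth-rich GPUs for decode, CPU or serverless workers for stateless reward scoring), but the queue-per-stage structure is the same.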