Reliable and Resilient Collective Communication Library for LLM Training and Serving
By: Wei Wang, Nengneng Yu, Sixian Xiong and more
Potential Business Impact:
Keeps large GPU training and serving jobs running when network hardware breaks.
Modern ML training and inference now span tens to tens of thousands of GPUs, where network faults can waste 10--15\% of GPU hours due to slow recovery. Common network errors and link fluctuations trigger timeouts that often terminate entire jobs, forcing expensive checkpoint rollback during training and request reprocessing during inference. We present R$^2$CCL, a fault-tolerant communication library that provides lossless, low-overhead failover by exploiting multi-NIC hardware. R$^2$CCL performs rapid connection migration, bandwidth-aware load redistribution, and resilient collective algorithms to maintain progress under failures. We evaluate R$^2$CCL on two 8-GPU H100 InfiniBand servers and via large-scale ML simulators modeling hundreds of GPUs with diverse failure patterns. Experiments show that R$^2$CCL is highly robust to NIC failures, incurring less than 1\% training overhead and less than 3\% inference overhead. R$^2$CCL outperforms the baselines AdapCC and DejaVu by 12.18$\times$ and 47$\times$, respectively.
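To make the bandwidth-aware load redistribution idea concrete, here is a minimal sketch of splitting a collective's payload across the surviving NICs in proportion to their link bandwidth after one NIC fails. This is an illustration only, not the R$^2$CCL API: the `Nic` class and `redistribute` function are hypothetical names introduced for this example.

```python
# Hypothetical sketch of bandwidth-aware redistribution across multi-NIC
# hardware after a NIC failure; names are illustrative, not R^2CCL's API.
from dataclasses import dataclass


@dataclass
class Nic:
    name: str
    bandwidth_gbps: float   # link bandwidth of this NIC
    healthy: bool = True    # flipped to False when a fault is detected


def redistribute(nics, total_bytes):
    """Split a collective's payload across healthy NICs in proportion to
    their bandwidth, so failover does not bottleneck on a single link."""
    healthy = [n for n in nics if n.healthy]
    if not healthy:
        raise RuntimeError("no healthy NICs left; cannot fail over")
    total_bw = sum(n.bandwidth_gbps for n in healthy)
    return {n.name: int(total_bytes * n.bandwidth_gbps / total_bw)
            for n in healthy}


if __name__ == "__main__":
    nics = [Nic("mlx5_0", 400), Nic("mlx5_1", 400), Nic("mlx5_2", 200)]
    nics[0].healthy = False                 # simulate a NIC failure
    print(redistribute(nics, 1 << 30))      # re-split a 1 GiB payload
```

Proportional splitting keeps the transfer time roughly balanced across the remaining links, which is the property a lossless, low-overhead failover path needs to avoid stalling the collective.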
Similar Papers
An Efficient, Reliable and Observable Collective Communication Library in Large-scale GPU Training Clusters
Distributed, Parallel, and Cluster Computing
Makes AI learn much faster and more reliably.
Collective Communication for 100k+ GPUs
Distributed, Parallel, and Cluster Computing
Speeds up training of huge AI models.