Score: 1

Reliable and Resilient Collective Communication Library for LLM Training and Serving

Published: December 31, 2025 | arXiv ID: 2512.25059v1

By: Wei Wang, Nengneng Yu, Sixian Xiong, and more

Potential Business Impact:

Keeps large GPU training and serving jobs running when network hardware fails, avoiding wasted compute.

Business Areas:
Cloud Computing, Internet Services, Software

Modern ML training and inference now span tens to tens of thousands of GPUs, where network faults can waste 10–15% of GPU hours due to slow recovery. Common network errors and link fluctuations trigger timeouts that often terminate entire jobs, forcing expensive checkpoint rollback during training and request reprocessing during inference. We present R²CCL, a fault-tolerant communication library that provides lossless, low-overhead failover by exploiting multi-NIC hardware. R²CCL performs rapid connection migration, bandwidth-aware load redistribution, and resilient collective algorithms to maintain progress under failures. We evaluate R²CCL on two 8-GPU H100 InfiniBand servers and via large-scale ML simulators modeling hundreds of GPUs with diverse failure patterns. Experiments show that R²CCL is highly robust to NIC failures, incurring less than 1% overhead for training and less than 3% for inference. R²CCL outperforms the baselines AdapCC and DejaVu by 12.18× and 47×, respectively.
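The abstract mentions bandwidth-aware load redistribution across multiple NICs as one ingredient of failover. The sketch below illustrates only that general idea under simple assumptions (split traffic across surviving NICs in proportion to their bandwidth); the names `Nic` and `redistribute` are hypothetical and are not part of R²CCL's actual API.

```python
# Minimal sketch (not the R^2CCL API): bandwidth-aware redistribution of a
# collective's payload across the surviving NICs after one NIC fails.
# All names and numbers here are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Nic:
    name: str
    bandwidth_gbps: float   # nominal link bandwidth
    healthy: bool = True

def redistribute(chunk_bytes: int, nics: list[Nic]) -> dict[str, int]:
    """Split a payload across healthy NICs in proportion to their bandwidth,
    so a failed NIC's share migrates to the remaining links."""
    healthy = [n for n in nics if n.healthy]
    if not healthy:
        raise RuntimeError("no healthy NICs left; cannot fail over")
    total_bw = sum(n.bandwidth_gbps for n in healthy)
    shares = {n.name: int(chunk_bytes * n.bandwidth_gbps / total_bw) for n in healthy}
    # Assign any rounding remainder to the fastest healthy NIC.
    fastest = max(healthy, key=lambda n: n.bandwidth_gbps)
    shares[fastest.name] += chunk_bytes - sum(shares.values())
    return shares

if __name__ == "__main__":
    nics = [Nic("mlx5_0", 400), Nic("mlx5_1", 400),
            Nic("mlx5_2", 400), Nic("mlx5_3", 400)]
    print(redistribute(1 << 30, nics))   # even split across 4 NICs
    nics[1].healthy = False              # simulate a NIC failure
    print(redistribute(1 << 30, nics))   # traffic migrates to the 3 survivors
```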

Country of Origin
🇺🇸 United States

Repos / Data Links

Page Count
31 pages

Category
Computer Science:
Distributed, Parallel, and Cluster Computing