SHIFT: An RDMA Failure-Resilient Layer for Distributed Training
By: Shengkai Lin, Kairui Zhou, Yibo Wu, and more
Under gang scheduling in large-scale distributed Large Language Model (LLM) training, a single network anomaly can propagate and cause complete task failure, and the frequency of such anomalies grows with network scale. Existing fault-tolerance mechanisms, such as checkpointing and runtime resilience methods, operate primarily at the application layer and inevitably disrupt training progress. We propose to address this challenge by introducing fault tolerance at the Remote Direct Memory Access (RDMA) layer and integrating it with existing application-layer techniques. We present SHIFT, a failure-resilient layer over RDMA that seamlessly redirects RDMA traffic across different intra-host NICs. By allowing applications to continue executing through network anomalies until the next checkpoint, SHIFT minimizes training progress loss. SHIFT is designed to be application-agnostic, transparent, and low-overhead: through a carefully designed failure state machine and control flow, unmodified applications such as PyTorch with NCCL gain RDMA-level fault tolerance. Experimental results demonstrate that SHIFT introduces minimal data-path overhead while ensuring application continuity under network failures.
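The abstract's core mechanism, a failure state machine that redirects traffic to another intra-host NIC, can be sketched at a high level. This is a minimal illustrative sketch, not SHIFT's actual implementation: the state names, the `FailoverController` class, and the device names (`mlx5_0`, `mlx5_1`) are all hypothetical, and real RDMA failover would additionally have to migrate queue pairs and re-register memory via the verbs API.

```python
from enum import Enum, auto

class LinkState(Enum):
    HEALTHY = auto()    # active NIC carrying traffic normally
    SUSPECTED = auto()  # anomaly observed, probing the link
    FAILED = auto()     # active NIC declared unusable
    RECOVERED = auto()  # traffic redirected to a backup NIC

class FailoverController:
    """Hypothetical per-connection state machine: on a NIC anomaly,
    redirect RDMA traffic to another intra-host NIC so the application
    keeps running until its next checkpoint."""

    def __init__(self, nics):
        self.nics = list(nics)  # intra-host NICs, primary first
        self.active = 0         # index of the NIC carrying traffic
        self.state = LinkState.HEALTHY

    def on_anomaly(self):
        # e.g. a completion-queue error or link-down event on the active NIC
        self.state = LinkState.SUSPECTED

    def on_probe_result(self, healthy):
        # resolve a suspected link after probing it
        if healthy:
            self.state = LinkState.HEALTHY
        else:
            self.state = LinkState.FAILED
            self._redirect()

    def _redirect(self):
        # pick the next available intra-host NIC; the connection the
        # application sees is untouched, only the underlying path changes
        backups = [i for i in range(len(self.nics)) if i != self.active]
        if backups:
            self.active = backups[0]
            self.state = LinkState.RECOVERED

ctrl = FailoverController(["mlx5_0", "mlx5_1"])
ctrl.on_anomaly()
ctrl.on_probe_result(healthy=False)
print(ctrl.nics[ctrl.active], ctrl.state.name)  # traffic now on the backup NIC
```

The transparency claim hinges on the redirect happening below the application: in this sketch the caller never sees `FAILED`, only a path change, which mirrors how an unmodified PyTorch/NCCL job could keep running across an anomaly.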