Adaptra: Straggler-Resilient Hybrid-Parallel Training with Pipeline Adaptation
By: Tianyuan Wu, Lunxi Cao, Hanfeng Lu, and more
Potential Business Impact:
Makes AI training faster by fixing slow parts.
Training large Deep Neural Network (DNN) models at scale often encounters straggler issues, mostly in communication, due to network congestion, RNIC/switch defects, or topological asymmetry. Under advanced pipeline parallelism, even minor communication delays can induce significant training slowdowns. This occurs because (1) slow communication disrupts the pipeline schedule, creating cascading "bubbles" in a domino effect, and (2) current GPU kernel scheduling is susceptible to head-of-line blocking, where slow communication blocks subsequent computations, further adding to these bubbles. To address these challenges, we present ADAPTRA, a straggler-resilient training system with two key optimizations. First, it optimally adapts the pipeline schedule in the presence of stragglers to absorb communication delays without inducing cascading bubbles, using a simple yet effective algorithm guided by an analytical model. Second, upon detecting slow communication, ADAPTRA offloads communication operations from GPU to host memory and utilizes CPU-side RDMA for data transfer. This eliminates head-of-line blocking, as subsequent computation kernels can be scheduled immediately on GPUs. Together, these optimizations effectively reduce pipeline stalls in the presence of communication stragglers, improving training iteration time by 1.2-3.5x in our experiments under various settings.
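To make the cascading-bubble intuition concrete, here is a minimal back-of-the-envelope cost model in Python. The function iteration_time, its parameters, and its formulas are illustrative assumptions for a generic 1F1B-style schedule, not ADAPTRA's actual analytical model or algorithm; it only contrasts a baseline schedule, where a slow link's delay cascades across microbatches, with an idealized adapted schedule that overlaps most of the delay with other ready compute.

```python
# Toy analytical model (hypothetical, not Adaptra's actual formulation):
# estimate per-iteration pipeline time when one inter-stage link is slow,
# and compare a baseline schedule against one that re-orders work to
# absorb the communication delay instead of propagating it downstream.

def iteration_time(num_stages, num_microbatches, t_fwd, t_bwd,
                   t_comm, straggler_link=None, straggler_delay=0.0,
                   absorb=False):
    """Very rough 1F1B-style cost model (all quantities in arbitrary time units).

    straggler_link: index of the slow stage boundary (0-based), or None.
    absorb: if True, pretend the schedule overlaps the extra delay with
            other ready work (the idealized effect of schedule adaptation),
            so only the non-overlappable remainder stalls the pipeline.
    """
    # Approximate steady-state cost per microbatch on the critical path.
    steady = max(t_fwd + t_bwd, 2 * t_comm)
    # Warm-up ramp + steady phase + cool-down drain.
    base = (num_stages - 1) * (t_fwd + t_comm) \
         + num_microbatches * steady \
         + (num_stages - 1) * (t_bwd + t_comm)

    if straggler_link is None:
        return base

    if not absorb:
        # Domino effect: every microbatch crossing the slow link is delayed,
        # and the delay cascades as bubbles through downstream stages.
        return base + num_microbatches * straggler_delay

    # Adapted schedule: the delay overlaps with compute of other microbatches;
    # only the slack exceeding per-microbatch compute still stalls the pipeline.
    residual = max(0.0, straggler_delay - (t_fwd + t_bwd))
    return base + straggler_delay + num_microbatches * residual


if __name__ == "__main__":
    common = dict(num_stages=4, num_microbatches=16,
                  t_fwd=1.0, t_bwd=2.0, t_comm=0.2)
    healthy = iteration_time(**common)
    slow = iteration_time(**common, straggler_link=1, straggler_delay=1.5)
    adapted = iteration_time(**common, straggler_link=1,
                             straggler_delay=1.5, absorb=True)
    print(f"healthy iteration:            {healthy:.1f}")
    print(f"straggler, baseline schedule: {slow:.1f}")
    print(f"straggler, adapted schedule:  {adapted:.1f}")
```

With these toy numbers, the baseline pays the 1.5-unit link delay once per microbatch, while the adapted schedule pays it roughly once per iteration, which mirrors the qualitative gap the abstract describes; the actual magnitudes depend entirely on the assumed parameters.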
Similar Papers
Straggler Tolerant and Resilient DL Training on Homogeneous GPUs
Distributed, Parallel, and Cluster Computing
Makes computer training faster by fixing slow parts.
DawnPiper: A Memory-scalable Pipeline Parallel Training Framework
Distributed, Parallel, and Cluster Computing
Trains bigger computer brains with less memory.
AdaPtis: Reducing Pipeline Bubbles with Adaptive Pipeline Parallelism on Heterogeneous Models
Distributed, Parallel, and Cluster Computing
Trains big computer brains much faster.