Straggler Tolerant and Resilient DL Training on Homogeneous GPUs
By: Zeyu Zhang, Haiying Shen
Despite the popularity of homogeneous GPU-based deep learning (DL) training, the prevalence, causes, and impact of stragglers, as well as the effectiveness of existing straggler mitigation approaches, remain poorly understood in this setting due to limited research on these questions. To fill this gap, we conducted comprehensive experiments and found that stragglers remain widespread due to imbalances in CPU and bandwidth usage. Moreover, existing mitigation methods that switch from synchronous stochastic gradient descent (SSGD) to asynchronous SGD (ASGD) may not improve Time-To-Accuracy (TTA) and can even generate more stragglers because of ASGD's higher resource consumption. To address these newly identified problems, we propose the Straggler Tolerant And Resilient DL training system (STAR). STAR introduces new synchronization modes that group workers for each parameter update. It provides both a heuristic method and an ML-based method to choose the synchronization mode that minimizes TTA, and it reallocates resources to support the selected mode while minimizing the impact on co-located jobs. Moreover, STAR proactively prevents stragglers by avoiding CPU and bandwidth overload both when allocating parameter servers (PSs), which consume substantial CPU and bandwidth, and during gradient transmission. Our trace-driven evaluation on AWS shows that STAR achieves 48-84% and 51-70% lower TTA than state-of-the-art systems in the PS and all-reduce architectures, respectively, while maintaining the converged accuracy of SSGD. The code for STAR is open-sourced.
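The abstract describes STAR's grouped synchronization modes only at a high level. Below is a minimal, hypothetical sketch (not STAR's actual implementation) of how a parameter-server loop could apply one update per group of arriving gradients, interpolating between SSGD (group of all workers) and ASGD (group of one); the names `ps_loop`, `recv_gradient`, `apply_update`, and `group_size` are assumptions made for illustration only.

```python
import numpy as np

def ps_loop(params, group_size, recv_gradient, apply_update, steps):
    """Hypothetical group-based synchronization sketch (not from the paper).

    group_size == number of workers -> behaves like SSGD (wait for all)
    group_size == 1                 -> behaves like ASGD (update per gradient)
    values in between               -> intermediate grouped modes
    """
    for _ in range(steps):
        # Block until gradients from `group_size` workers have arrived.
        grads = [recv_gradient() for _ in range(group_size)]
        # Average within the group and apply a single parameter update.
        avg_grad = np.mean(grads, axis=0)
        params = apply_update(params, avg_grad)
    return params
```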
Similar Papers
Adaptra: Straggler-Resilient Hybrid-Parallel Training with Pipeline Adaptation
Distributed, Parallel, and Cluster Computing
Mitigates stragglers in hybrid-parallel training through pipeline adaptation.
Understanding Stragglers in Large Model Training Using What-if Analysis
Distributed, Parallel, and Cluster Computing
Characterizes the causes of stragglers in large model training through what-if analysis.
Distributed Deep Learning using Stochastic Gradient Staleness
Machine Learning (CS)
Accelerates distributed deep learning training using stochastic gradient staleness.