TAGC: Optimizing Gradient Communication in Distributed Transformer Training
By: Igor Polyakov, Alexey Dukhanov, Egor Spirin
Potential Business Impact:
Trains AI models up to 15% faster.
The increasing complexity of large language models (LLMs) necessitates efficient training strategies to mitigate the high computational costs of distributed training. A significant bottleneck in this process is gradient synchronization across multiple GPUs, particularly in the zero-redundancy parallelism mode. In this paper, we introduce Transformer-Aware Gradient Compression (TAGC), an optimized gradient compression algorithm designed specifically for transformer-based models. TAGC extends the lossless homomorphic compression method by adapting it for sharded models and incorporating transformer-specific optimizations, such as layer-selective compression and dynamic sparsification. Our experimental results demonstrate that TAGC accelerates training by up to 15% compared to the standard Fully Sharded Data Parallel (FSDP) approach, with minimal impact on model quality. We integrate TAGC into the PyTorch FSDP framework; the implementation is publicly available at https://github.com/ipolyakov/TAGC.
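To illustrate the two transformer-specific ideas the abstract mentions, here is a minimal sketch of layer-selective compression and dynamic (top-k) sparsification of a gradient shard. This is not the authors' implementation or the PyTorch FSDP API; the helper names (should_compress, sparsify_gradient, densify_gradient) and the compress_ratio parameter are illustrative assumptions, and in practice such logic would sit in FSDP's gradient-communication path before the shards are exchanged.

```python
# Sketch only: layer-selective, dynamically sparsified gradient compression
# in the spirit of TAGC. Names and parameters are assumptions for illustration.
import torch


def should_compress(param_name: str, compress_layers: set) -> bool:
    """Layer-selective compression: compress only gradients belonging to
    selected transformer sublayers (e.g. large attention/MLP projections)."""
    return any(layer in param_name for layer in compress_layers)


def sparsify_gradient(grad: torch.Tensor, compress_ratio: float):
    """Dynamic sparsification: keep the top-k entries by magnitude, where k
    is derived from the current compression ratio."""
    flat = grad.flatten()
    k = max(1, int(flat.numel() * compress_ratio))
    _, indices = torch.topk(flat.abs(), k)
    return flat[indices], indices, grad.shape


def densify_gradient(values, indices, shape, numel, device):
    """Reconstruct a dense gradient from its sparse (values, indices) form."""
    flat = torch.zeros(numel, device=device, dtype=values.dtype)
    flat[indices] = values
    return flat.view(shape)


if __name__ == "__main__":
    torch.manual_seed(0)
    compress_layers = {"self_attn", "mlp"}        # illustrative layer selection
    grad = torch.randn(1024, 1024)                # stand-in for one gradient shard
    if should_compress("layers.3.self_attn.q_proj.weight", compress_layers):
        vals, idx, shape = sparsify_gradient(grad, compress_ratio=0.1)
        # Only (vals, idx) would be communicated between ranks; the dense
        # gradient is rebuilt before the optimizer step.
        restored = densify_gradient(vals, idx, shape, grad.numel(), grad.device)
        print(f"kept {vals.numel()} / {grad.numel()} entries")
```

In this sketch the compression ratio is fixed; a dynamic scheme as described in the paper would adjust it per layer or per step based on gradient statistics.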
Similar Papers
A Tensor-Train Decomposition based Compression of LLMs on Group Vector Systolic Accelerator
Hardware Architecture
Makes big computer brains run faster on small chips.
EDGC: Entropy-driven Dynamic Gradient Compression for Efficient LLM Training
Machine Learning (CS)
Makes AI learn much faster without losing smartness.
Distributed Low-Communication Training with Decoupled Momentum Optimization
Machine Learning (CS)
Trains big computer brains with less internet.