Communication Efficient LLM Pre-training with SparseLoCo
By: Amir Sarfi, Benjamin Thérien, Joel Lidin, and more
Potential Business Impact:
Makes AI learn faster with less data sent.
Communication-efficient distributed training algorithms have received considerable interest recently due to their benefits for training Large Language Models (LLMs) in bandwidth-constrained settings, such as across data centers and over the internet. Despite reducing communication frequency, these methods still typically require communicating a full copy of the model's gradients, resulting in a communication bottleneck even for cross-datacenter links. Furthermore, they can slightly degrade performance compared to a naive AdamW DDP baseline. While quantization and error feedback are often applied to reduce the pseudo-gradient's size, in the context of LLM pre-training, existing approaches have been unable to additionally leverage sparsification and have achieved only limited quantization. In this work, we introduce SparseLoCo, a communication-efficient training algorithm for LLMs that effectively leverages Top-k sparsification and quantization to reach extreme compression ratios of up to 1-3% sparsity and 2-bit quantization while outperforming full-precision DiLoCo. Our key observations are that outer momentum can be locally approximated by error feedback combined with aggressive sparsity, and that sparse aggregation can actually improve model performance. We empirically demonstrate in a range of communication-constrained LLM training settings that SparseLoCo provides significant benefits in both performance and communication cost.
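To make the core idea concrete, here is a minimal sketch of Top-k sparsification with a local error-feedback buffer applied to the pseudo-gradient (the weight delta produced by the inner optimization rounds). This is an illustrative assumption of how the mechanism described in the abstract could look in PyTorch; the function and class names are hypothetical and not the authors' implementation, and quantization and the actual all-reduce are omitted.

```python
# Illustrative sketch only: Top-k sparsification of a pseudo-gradient with
# error feedback standing in for an outer momentum buffer. Not the paper's code.
import torch


def sparsify_topk(tensor: torch.Tensor, k_fraction: float):
    """Keep the largest-magnitude k_fraction of entries; zero out the rest."""
    flat = tensor.flatten()
    k = max(1, int(k_fraction * flat.numel()))
    _, idx = torch.topk(flat.abs(), k)
    mask = torch.zeros_like(flat)
    mask[idx] = 1.0
    return (flat * mask).view_as(tensor), mask.view_as(tensor)


class SparseOuterStep:
    """Hypothetical outer update: the error buffer accumulates whatever Top-k
    drops each round, playing a role similar to outer momentum."""

    def __init__(self, params, k_fraction=0.02, outer_lr=1.0):
        self.k_fraction = k_fraction
        self.outer_lr = outer_lr
        self.error = [torch.zeros_like(p) for p in params]

    def step(self, params, pseudo_grads):
        for p, g, e in zip(params, self.error_aligned(pseudo_grads), self.error):
            pass  # placeholder, see loop below

    def apply(self, params, pseudo_grads):
        for p, g, e in zip(params, pseudo_grads, self.error):
            # Add back the residual mass that earlier rounds failed to transmit.
            corrected = g + e
            sparse_g, _ = sparsify_topk(corrected, self.k_fraction)
            # In a real system, only `sparse_g` (further quantized) would be
            # aggregated across replicas.
            e.copy_(corrected - sparse_g)  # error-feedback update
            p.data.add_(sparse_g, alpha=-self.outer_lr)
```

The design intuition, as described in the abstract, is that the residual carried by the error buffer accumulates information across outer rounds much like a momentum term would, which is why aggressive sparsity does not require a separate full-size outer momentum state to be communicated.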
Similar Papers
NoLoCo: No-all-reduce Low Communication Training Method for Large Models
Machine Learning (CS)
Trains big AI models using less computer talk.
Strategies for Improving Communication Efficiency in Distributed and Federated Learning: Compression, Local Training, and Personalization
Machine Learning (CS)
Makes AI learn faster with less data sent.
Distributed Low-Communication Training with Decoupled Momentum Optimization
Machine Learning (CS)
Trains big computer brains with less internet.