FFTrainer: Fast Failover in Large-Language Model Training with Almost-Free State Management
By: Bohan Zhao, Yuanhong Wang, Chenglin Liu, and more
Potential Business Impact:
Keeps AI training from crashing and speeds up recovery.
Recent developments in large language models (LLMs) have introduced new requirements for efficient and robust training. As LLM clusters scale, node failures, lengthy recoveries, and bulky checkpoints erode efficiency. Infrequent asynchronous checkpoints trigger costly rollbacks, yet higher frequencies add prohibitive overhead. To address these challenges, we propose FFTrainer, a system designed for robust LLM training. FFTrainer leverages surplus network capacity to quickly save and load states, thereby preventing rollbacks and accelerating recovery. Compared with prior checkpointing approaches, FFTrainer reduces recovery time by up to 98% and mitigates GPU utilization loss by up to 68% without hindering normal training.
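The abstract describes the core mechanism only at a high level: instead of rolling back to an older on-disk checkpoint, spare network bandwidth is used to keep a fresh copy of training state reachable elsewhere, so a failed node can reload quickly. Below is a minimal sketch of that general idea, not FFTrainer's actual interface; the function names (send_snapshot, replicate_async, receive_snapshot), the length-prefixed TCP transfer, and the toy state dictionary are all illustrative assumptions.

```python
# Hedged sketch: background replication of training state to a peer over the
# network, so recovery can reload a recent replica instead of rolling back.
import io
import pickle
import socket
import struct
import threading


def send_snapshot(state: dict, peer: tuple[str, int]) -> None:
    """Serialize the state and push it to a peer, off the training critical path."""
    payload = pickle.dumps(state)
    with socket.create_connection(peer) as sock:
        sock.sendall(struct.pack("!Q", len(payload)))  # length-prefixed frame
        sock.sendall(payload)


def replicate_async(state: dict, peer: tuple[str, int]) -> threading.Thread:
    """Fire-and-forget replication so the training loop is not blocked."""
    t = threading.Thread(target=send_snapshot, args=(state, peer), daemon=True)
    t.start()
    return t


def receive_snapshot(port: int) -> dict:
    """Peer side: accept one snapshot and return it for fast reload on failover."""
    with socket.create_server(("0.0.0.0", port)) as srv:
        conn, _ = srv.accept()
        with conn:
            size = struct.unpack("!Q", conn.recv(8))[0]
            buf = io.BytesIO()
            while buf.tell() < size:
                chunk = conn.recv(min(1 << 20, size - buf.tell()))
                if not chunk:
                    break
                buf.write(chunk)
            return pickle.loads(buf.getvalue())


if __name__ == "__main__":
    # Toy training state standing in for model/optimizer tensors.
    step_state = {"step": 1200, "weights": [0.1, 0.2, 0.3]}
    server = threading.Thread(target=lambda: print(receive_snapshot(50007)), daemon=True)
    server.start()
    replicate_async(step_state, ("127.0.0.1", 50007)).join()
    server.join(timeout=5)
```

The point of the sketch is only the shape of the trade-off the abstract raises: replication happens asynchronously over otherwise idle network capacity, so checkpoint-style state can be kept fresh without adding overhead to each training step.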
Similar Papers
FlashRecovery: Fast and Low-Cost Recovery from Failures for Large-Scale Training of LLMs
Distributed, Parallel, and Cluster Computing
Fixes AI training crashes in seconds.
Adaptive Fault Tolerance Mechanisms of Large Language Models in Cloud Computing Environments
Distributed, Parallel, and Cluster Computing
Keeps AI working even when computers break.
FailSafe: High-performance Resilient Serving
Distributed, Parallel, and Cluster Computing
Keeps AI running smoothly even if parts break.