FFTrainer: Fast Failover in Large-Language Model Training with Almost-Free State Management

Published: December 3, 2025 | arXiv ID: 2512.03644v1

By: Bohan Zhao, Yuanhong Wang, Chenglin Liu, and more

Potential Business Impact:

Keeps large-scale model training from losing progress when machines fail, and speeds up recovery.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Recent developments in large language models (LLMs) have introduced new requirements for efficient and robust training. As LLM clusters scale, node failures, lengthy recoveries, and bulky checkpoints erode efficiency. Infrequent asynchronous checkpointing triggers costly rollbacks after a failure, yet checkpointing more frequently adds prohibitive overhead. To address these challenges, we propose FFTrainer, a system designed for robust LLM training. FFTrainer leverages surplus network capacity to quickly save and load states, thereby preventing rollbacks and accelerating recovery. Compared with prior checkpointing approaches, FFTrainer reduces recovery time by up to 98% and mitigates GPU utilization loss by up to 68% without hindering normal training.
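The core idea, shipping training state to peer nodes over spare network bandwidth instead of writing bulky checkpoints to storage, can be illustrated with a small sketch. The Python snippet below is a hypothetical illustration of that pattern, not the paper's actual API: PeerReplicator, train_step, and the in-process peer_memory dict standing in for a remote node's RAM are all assumptions for the sake of a runnable example.

```python
import copy
import queue
import threading
import time

# Hypothetical sketch of asynchronous peer replication in the spirit of
# "save state over surplus network capacity". In a real cluster the
# snapshot bytes would travel over a spare interconnect link to another
# node's memory; here a local dict simulates that peer.

class PeerReplicator:
    def __init__(self):
        self._outbox = queue.Queue(maxsize=1)  # keep only the newest snapshot
        self.peer_memory = {}                  # simulated remote node's RAM
        self._worker = threading.Thread(target=self._ship, daemon=True)
        self._worker.start()

    def replicate_async(self, step, state):
        # Copy the state off the training path, then hand it to the shipper.
        snapshot = (step, copy.deepcopy(state))
        try:
            self._outbox.put_nowait(snapshot)
        except queue.Full:
            try:
                self._outbox.get_nowait()      # drop the stale pending snapshot
            except queue.Empty:
                pass
            self._outbox.put_nowait(snapshot)

    def _ship(self):
        # Background "network transfer": drains snapshots into peer memory.
        while True:
            step, state = self._outbox.get()
            self.peer_memory["latest"] = (step, state)

    def recover(self):
        # On failure, reload the newest replicated state from the peer
        # instead of rolling back to an old on-disk checkpoint.
        return self.peer_memory.get("latest")


def train_step(state):
    state["weights"] = [w + 0.01 for w in state["weights"]]  # dummy update
    state["step"] += 1


if __name__ == "__main__":
    replicator = PeerReplicator()
    state = {"weights": [0.0] * 4, "step": 0}
    for step in range(1, 101):
        train_step(state)
        replicator.replicate_async(step, state)  # no disk I/O on this path
    time.sleep(0.1)                              # let the shipper drain
    latest = replicator.recover()
    print("peer holds state from step:", latest[0] if latest else None)
```

Because the snapshot copy happens off the training path and only the newest state is kept, replication frequency can rise without stalling training, which is what allows recovery to resume from a near-current state rather than rolling back to a stale checkpoint.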

Country of Origin
🇨🇳 China

Page Count
13 pages

Category
Computer Science:
Distributed, Parallel, and Cluster Computing