Understanding LLM Checkpoint/Restore I/O Strategies and Patterns
By: Mikaila J. Gossman, Avinash Maurya, Bogdan Nicolae, and more
Potential Business Impact:
Saves AI training progress much faster.
As LLMs and foundation models scale, checkpoint/restore has become a critical pattern for training and inference. With 3D parallelism (tensor, pipeline, data), checkpointing involves many processes, each managing numerous tensors of varying shapes and sizes that must be persisted frequently to stable storage (e.g., parallel file systems). This turns checkpoint/restore into a big-data I/O problem characterized by volume, variety, and velocity. The workflow must traverse the full storage stack -- from GPU memory through host memory and local storage to external repositories -- whose tiers differ by orders of magnitude in performance, creating bottlenecks under concurrency even with asynchronous flush/prefetch. Kernel-accelerated I/O libraries such as liburing may mitigate these issues compared with POSIX I/O, but their effectiveness for LLM checkpointing remains underexplored. We develop microbenchmarks to quantify the trade-offs of using liburing, evaluating how aggregation, alignment, and I/O coalescing interact under buffered and direct I/O. We find that uncoalesced small-buffer operations halve throughput relative to synthetic workloads, while file-system-aware aggregation restores bandwidth and reduces metadata overhead. Compared to state-of-the-art LLM checkpointing engines, our approach achieves up to 3.9x higher write throughput than DataStates-LLM and 7.6x higher than TorchSnapshot. These results highlight the need for aggregation and coalescing strategies that align with modern file systems and I/O backends.
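To make the aggregation and coalescing idea concrete, below is a minimal C sketch, not the paper's benchmark code: the file name, shard count, shard size, and 4 KiB alignment are illustrative assumptions. It copies many small checkpoint shards into one aligned host buffer and flushes it with a single liburing write under O_DIRECT, instead of issuing one small write per shard.

/*
 * Sketch: coalesce small checkpoint shards into one aligned buffer and
 * write it with a single io_uring submission under O_DIRECT.
 * Constants and the output file name are illustrative assumptions.
 * Build: gcc -O2 coalesce.c -luring -o coalesce
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <liburing.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define ALIGN      4096          /* assumed O_DIRECT / file-system alignment */
#define N_SHARDS   256           /* assumed number of small tensor shards    */
#define SHARD_SIZE (64 * 1024)   /* assumed shard size: 64 KiB each          */

int main(void) {
    const size_t total = (size_t)N_SHARDS * SHARD_SIZE;

    /* Aggregation: pack all small shards into one aligned buffer so the
     * storage stack sees a single large, aligned request instead of many
     * small ones. */
    void *buf = NULL;
    if (posix_memalign(&buf, ALIGN, total) != 0) {
        perror("posix_memalign");
        return 1;
    }
    for (int i = 0; i < N_SHARDS; i++)
        memset((char *)buf + (size_t)i * SHARD_SIZE, i & 0xff, SHARD_SIZE);

    /* O_DIRECT bypasses the page cache; it requires aligned buffers,
     * offsets, and lengths, which the aggregation above guarantees. */
    int fd = open("ckpt.bin", O_WRONLY | O_CREAT | O_DIRECT, 0644);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    struct io_uring ring;
    if (io_uring_queue_init(8, &ring, 0) < 0) {
        fprintf(stderr, "io_uring_queue_init failed\n");
        return 1;
    }

    /* One coalesced submission instead of N_SHARDS tiny writes. */
    struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);
    io_uring_prep_write(sqe, fd, buf, total, 0);
    io_uring_submit(&ring);

    struct io_uring_cqe *cqe;
    io_uring_wait_cqe(&ring, &cqe);
    if (cqe->res < 0)
        fprintf(stderr, "write failed: %s\n", strerror(-cqe->res));
    io_uring_cqe_seen(&ring, cqe);

    io_uring_queue_exit(&ring);
    close(fd);
    free(buf);
    return 0;
}

Replacing the single coalesced submission with one io_uring_prep_write per shard at shard-sized offsets reproduces the uncoalesced small-buffer pattern whose throughput penalty the abstract describes.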
Similar Papers
Characterizing Communication Patterns in Distributed Large Language Model Inference
Distributed, Parallel, and Cluster Computing
Makes AI talk faster by fixing how computers share info.
Taming the Memory Footprint Crisis: System Design for Production Diffusion LLM Serving
Distributed, Parallel, and Cluster Computing
Makes AI image creation faster and cheaper.
MLP-Offload: Multi-Level, Multi-Path Offloading for LLM Pre-training to Break the GPU Memory Wall
Distributed, Parallel, and Cluster Computing
Trains giant AI models faster on less hardware.