10Cache: Heterogeneous Resource-Aware Tensor Caching and Migration for LLM Training
By: Sabiha Afroz, Redwan Ibne Seraj Khan, Hadeel Albahar, and more
Potential Business Impact:
Speeds up LLM training and cuts cloud costs by using GPU, CPU, and NVMe memory more efficiently.
Training large language models (LLMs) in the cloud faces growing memory bottlenecks due to the limited capacity and high cost of GPUs. While GPU memory offloading to CPU and NVMe has made large-scale training more feasible, existing approaches suffer from high tensor migration latency and suboptimal device memory utilization, ultimately increasing training time and cloud costs. To address these challenges, we present 10Cache, a resource-aware tensor caching and migration system that accelerates LLM training by intelligently coordinating memory usage across GPU, CPU, and NVMe tiers. 10Cache profiles tensor execution order to construct prefetch policies, allocates memory buffers in pinned memory based on tensor size distributions, and reuses memory buffers to minimize allocation overhead. Designed for cloud-scale deployments, 10Cache improves memory efficiency and reduces reliance on high-end GPUs. Across diverse LLM workloads, it achieves up to 2x speedup in training time, improves GPU cache hit rate by up to 86.6x, and increases CPU/GPU memory utilization by up to 2.15x and 1.33x, respectively, compared to state-of-the-art offloading methods. These results demonstrate that 10Cache is a practical and scalable solution for optimizing LLM training throughput and resource efficiency in cloud environments.
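The paper's code is not reproduced here, but the two mechanisms the abstract names can be illustrated concretely. The sketch below is a minimal plain-PyTorch illustration, not the authors' implementation: a pool of reusable pinned-memory staging buffers bucketed by tensor size, and a prefetcher that walks a profiled tensor execution order and stages upcoming tensors for asynchronous host-to-GPU copies. The names PinnedBufferPool and Prefetcher, the depth parameter, and the dict-based CPU store are illustrative assumptions, not 10Cache's actual API.

```python
import collections
import torch


class PinnedBufferPool:
    """Reusable page-locked (pinned) CPU staging buffers, bucketed by size."""

    def __init__(self, bucket_sizes):
        self.bucket_sizes = sorted(bucket_sizes)     # element counts drawn from the size distribution
        self.buckets = {}                            # (size, dtype) -> free list of buffers

    def acquire(self, numel, dtype):
        # Use the smallest bucket that fits; fall back to an exact-size buffer.
        size = next((s for s in self.bucket_sizes if s >= numel), numel)
        free = self.buckets.setdefault((size, dtype), collections.deque())
        if free:
            return free.popleft()
        pin = torch.cuda.is_available()              # pinning requires a CUDA runtime
        return torch.empty(size, dtype=dtype, pin_memory=pin)

    def release(self, buf):
        self.buckets.setdefault((buf.numel(), buf.dtype), collections.deque()).append(buf)


class Prefetcher:
    """Copies upcoming tensors to the GPU ahead of use, following a profiled order."""

    def __init__(self, pool, execution_order, cpu_store, device, depth=2):
        # `execution_order` is a profiled list of tensor names; `cpu_store` maps
        # names to CPU-resident tensors; `device` is a torch.device.
        self.pool, self.order, self.store = pool, execution_order, cpu_store
        self.device, self.depth = device, depth
        self.cache = {}                              # name -> (device tensor, staging buffer)

    def step(self, current_idx):
        # Stage and copy the next `depth` tensors in the profiled order.
        for name in self.order[current_idx + 1: current_idx + 1 + self.depth]:
            if name in self.cache:
                continue
            src = self.store[name]
            staging = self.pool.acquire(src.numel(), src.dtype)
            flat = staging[: src.numel()]
            flat.copy_(src.reshape(-1))              # host-side copy into pinned memory
            dev = flat.to(self.device, non_blocking=True, copy=True)  # async H2D when pinned
            self.cache[name] = (dev.view(src.shape), staging)

    def get(self, name):
        # Hand out the prefetched tensor and recycle its pinned staging buffer.
        dev, staging = self.cache.pop(name)
        if self.device.type == "cuda":
            torch.cuda.current_stream().synchronize()  # don't reuse the buffer mid-copy
        self.pool.release(staging)
        return dev
```

In this sketch, bucketing buffers by observed tensor sizes amortizes the cost of pinned allocations, and depth controls how far ahead of the execution order the copies run; 10Cache's actual buffer-sizing and prefetch policies, and its NVMe tier, are described in the paper itself.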
Similar Papers
Cost-Efficient LLM Training with Lifetime-Aware Tensor Offloading via GPUDirect Storage
Distributed, Parallel, and Cluster Computing
Trains big AI models faster using extra storage.
Accelerating LLM Inference via Dynamic KV Cache Placement in Heterogeneous Memory System
Hardware Architecture
Makes AI remember more by using faster memory.