Score: 2

10Cache: Heterogeneous Resource-Aware Tensor Caching and Migration for LLM Training

Published: November 18, 2025 | arXiv ID: 2511.14124v1

By: Sabiha Afroz, Redwan Ibne Seraj Khan, Hadeel Albahar, and more

Potential Business Impact:

Cuts LLM training time (up to 2x in reported results) and cloud cost by using CPU and NVMe memory more efficiently, reducing reliance on expensive high-end GPUs.

Business Areas:
Cloud Computing, Internet Services, Software

Training large language models (LLMs) in the cloud faces growing memory bottlenecks due to the limited capacity and high cost of GPUs. While GPU memory offloading to CPU and NVMe has made large-scale training more feasible, existing approaches suffer from high tensor migration latency and suboptimal device memory utilization, ultimately increasing training time and cloud costs. To address these challenges, we present 10Cache, a resource-aware tensor caching and migration system that accelerates LLM training by intelligently coordinating memory usage across GPU, CPU, and NVMe tiers. 10Cache profiles tensor execution order to construct prefetch policies, allocates memory buffers in pinned memory based on tensor size distributions, and reuses memory buffers to minimize allocation overhead. Designed for cloud-scale deployments, 10Cache improves memory efficiency and reduces reliance on high-end GPUs. Across diverse LLM workloads, it achieves up to 2x speedup in training time, improves GPU cache hit rate by up to 86.6x, and increases CPU/GPU memory utilization by up to 2.15x and 1.33x, respectively, compared to state-of-the-art offloading methods. These results demonstrate that 10Cache is a practical and scalable solution for optimizing LLM training throughput and resource efficiency in cloud environments.
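To make the abstract's three mechanisms concrete, here is a minimal PyTorch sketch of two of them: a pool of reusable pinned (page-locked) host buffers bucketed by tensor size, and a prefetcher that copies tensors to the GPU on a side stream following a profiled execution order. The names (`PinnedBufferPool`, `prefetch`, `profiled_order`) are hypothetical illustrations of the ideas described above, not the paper's actual API or implementation.

```python
import torch
from collections import defaultdict, deque


class PinnedBufferPool:
    """Reusable pinned host buffers, bucketed by size.

    Hypothetical sketch of the buffer-reuse idea from the abstract:
    allocating pinned memory is expensive, so buffers are rounded up
    to power-of-two sizes and recycled instead of reallocated.
    """

    def __init__(self, dtype=torch.float16):
        self.dtype = dtype
        self.free = defaultdict(deque)  # bucket numel -> free buffers

    def acquire(self, numel):
        bucket = 1 << max(0, (numel - 1).bit_length())  # round up to power of two
        if self.free[bucket]:
            return self.free[bucket].popleft()
        # Pinned memory enables asynchronous, non-blocking CPU<->GPU copies.
        return torch.empty(bucket, dtype=self.dtype, pin_memory=True)

    def release(self, buf):
        self.free[buf.numel()].append(buf)


def prefetch(profiled_order, cpu_tensors, device="cuda"):
    """Copy upcoming tensors to the GPU on a side CUDA stream, overlapping
    transfers with compute. `profiled_order` stands in for the tensor
    execution order that 10Cache profiles; `cpu_tensors` must live in
    pinned memory for the non-blocking copies to be truly asynchronous.
    """
    stream = torch.cuda.Stream()
    gpu_cache = {}
    with torch.cuda.stream(stream):
        for name in profiled_order:
            gpu_cache[name] = cpu_tensors[name].to(device, non_blocking=True)
    return gpu_cache, stream
```

A caller might acquire a staging buffer, fill it from NVMe, prefetch it, and return it to the pool with `pool.release(buf)` once the copy completes, so the same pinned region serves many tensors of similar size. This is only an illustration of the caching pattern under these assumptions; the paper's system additionally sizes its buffer pools from observed tensor size distributions.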

Country of Origin
🇺🇸 United States

Repos / Data Links

Page Count
14 pages

Category
Computer Science:
Distributed, Parallel, and Cluster Computing