Accelerating LLM Inference via Dynamic KV Cache Placement in Heterogeneous Memory System
By: Yunhua Fang, Rui Xie, Asad Ul Haq, and more
Potential Business Impact:
Makes AI remember more without slowing down.
Large Language Model (LLM) inference is increasingly constrained by memory bandwidth, with frequent access to the key-value (KV) cache dominating data movement. While attention sparsity reduces some memory traffic, the relevance of past tokens varies over time, so the full KV cache must remain accessible, sustaining pressure on both bandwidth and capacity. With advances in interconnects such as NVLink and in off-package DRAM such as LPDDR5X, modern AI hardware now integrates high-bandwidth memory (HBM) with high-speed off-package DRAM, making heterogeneous memory systems a practical solution. This work investigates dynamic KV cache placement across such systems to maximize aggregated bandwidth utilization under capacity constraints. Rather than proposing a specific scheduling policy, we formulate the placement problem mathematically and derive a theoretical upper bound, revealing substantial headroom for runtime optimization. To our knowledge, this is the first formal treatment of dynamic KV cache scheduling in heterogeneous memory systems for LLM inference.
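The abstract does not spell out the formulation, but as rough intuition for what "maximize aggregated bandwidth under capacity constraints" means in a two-tier system, here is a minimal Python sketch of an idealized upper bound on effective KV-cache read bandwidth. It is not the paper's model: the function, parameter names, and the GH200-like example numbers are assumptions for illustration only, and it assumes both tiers can stream their shares of the cache in parallel.

```python
# Illustrative sketch only (NOT the paper's formulation): an idealized upper bound
# on effective KV-cache read bandwidth when a cache of size S is split between HBM
# and off-package DRAM that are read concurrently. All parameter names are assumed.

def kv_placement_upper_bound(kv_size_gb, hbm_cap_gb, hbm_bw_gbps, dram_bw_gbps):
    """Return (hbm_fraction, effective_bandwidth_gbps) for an idealized split.

    If both tiers stream their shares concurrently, read time is
    max(x*S/B_hbm, (1-x)*S/B_dram). The bandwidth-proportional split
    x* = B_hbm / (B_hbm + B_dram) equalizes the two terms and attains the
    aggregate B_hbm + B_dram, unless HBM capacity caps x first.
    """
    x_star = hbm_bw_gbps / (hbm_bw_gbps + dram_bw_gbps)      # bandwidth-proportional share
    x = min(x_star, hbm_cap_gb / kv_size_gb)                 # respect HBM capacity
    read_time = max(x * kv_size_gb / hbm_bw_gbps,
                    (1.0 - x) * kv_size_gb / dram_bw_gbps)   # tiers are read in parallel
    return x, kv_size_gb / read_time                         # effective aggregate bandwidth


if __name__ == "__main__":
    # Assumed, GH200-like numbers: 4 TB/s HBM, 500 GB/s LPDDR5X, 96 GB of HBM
    # budgeted for a 120 GB KV cache.
    frac, bw = kv_placement_upper_bound(kv_size_gb=120, hbm_cap_gb=96,
                                        hbm_bw_gbps=4000, dram_bw_gbps=500)
    print(f"HBM share: {frac:.2f}, effective bandwidth: {bw:.0f} GB/s")
```

In this toy model the capacity constraint is what keeps the effective bandwidth below the raw sum of the two tiers, which is the kind of gap between achieved and achievable bandwidth that a runtime placement policy would try to close.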
Similar Papers
Accelerating LLM Inference Throughput via Asynchronous KV Cache Prefetching
Machine Learning (CS)
Makes AI think much faster by using smart memory.
Hardware-based Heterogeneous Memory Management for Large Language Model Inference
Hardware Architecture
Makes AI models run faster on less memory.