Adaptive Cache Pollution Control for Large Language Model Inference Workloads Using Temporal CNN-Based Prediction and Priority-Aware Replacement
By: Songze Liu, Hongkun Du, Shaowen Wang
Potential Business Impact:
Makes AI models run faster and use less memory.
Large Language Models (LLMs) such as GPT and LLaMA exhibit distinctive memory access characteristics during inference due to frequent token sequence lookups and embedding vector retrievals. These workloads generate highly irregular and bursty access patterns, causing traditional prefetching and replacement policies to mispredict and trigger severe cache pollution, which degrades system performance. To address this challenge, this paper proposes an Adaptive Cache Pollution Control (ACPC) mechanism tailored to LLM inference workloads, integrating Temporal Convolutional Network (TCN)-based access prediction with a priority-aware replacement strategy. The TCN module learns temporal dependencies in token access sequences to identify cache lines with high reuse potential, while the replacement policy dynamically adjusts eviction priorities based on predicted reuse likelihood and cache occupancy. The framework is implemented and evaluated on representative transformer-based inference traces, including GPT-style autoregressive decoding and embedding retrieval workloads. Compared with state-of-the-art machine-learning-based replacement baselines, ACPC reduces cache pollution by 41.7 percent, improves cache hit rate by 8.9 percent, and cuts the L2 miss penalty by 60.0 percent. It also increases token generation throughput by 15.9 percent and achieves the lowest final loss of 0.21, confirming its efficiency and stability under complex LLM inference workloads. These results highlight ACPC's effectiveness at recognizing useful cache lines and mitigating redundant prefetches under dynamic LLM access behavior. The approach provides a scalable, learning-driven solution for optimizing memory efficiency and latency in large-scale LLM serving and inference systems.
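To make the mechanism concrete, below is a minimal sketch, assuming a PyTorch implementation, of how a causal temporal CNN could score reuse likelihood from a short window of recent access features and how that score could be combined with recency and cache occupancy to choose an eviction victim. The names ReusePredictorTCN and priority_aware_victim, the feature layout, and the weighting rule are illustrative assumptions, not the paper's actual implementation.

```python
# Illustrative sketch (not the authors' code): a small causal TCN that scores
# reuse likelihood from a recent access-history window, plus a priority-aware
# eviction rule mixing the predicted score with recency and cache occupancy.
# All names, features, and hyperparameters here are assumptions.

import torch
import torch.nn as nn


class ReusePredictorTCN(nn.Module):
    """Causal temporal CNN: maps a window of recent access features
    (e.g., token-id embedding deltas, inter-access gaps) to a reuse
    probability for the cache line touched at the most recent step."""

    def __init__(self, in_features: int = 8, channels: int = 32, levels: int = 3):
        super().__init__()
        layers = []
        ch = in_features
        for i in range(levels):
            dilation = 2 ** i
            layers += [
                # Left-pad so the convolution never looks at future accesses.
                nn.ConstantPad1d((2 * dilation, 0), 0.0),
                nn.Conv1d(ch, channels, kernel_size=3, dilation=dilation),
                nn.ReLU(),
            ]
            ch = channels
        self.tcn = nn.Sequential(*layers)
        self.head = nn.Linear(channels, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, in_features, window_len)
        h = self.tcn(x)                     # (batch, channels, window_len)
        last = h[:, :, -1]                  # features at the most recent access
        return torch.sigmoid(self.head(last)).squeeze(-1)  # reuse probability


def priority_aware_victim(reuse_prob: torch.Tensor,
                          ages: torch.Tensor,
                          occupancy: float) -> int:
    """Pick an eviction victim for one cache set.

    reuse_prob: predicted reuse likelihood per way (from the TCN).
    ages:       recency rank per way (larger = older).
    occupancy:  fraction of the set holding recently hit lines; under
                pressure the prediction is weighted more heavily.
    """
    alpha = 0.5 + 0.5 * occupancy            # trust the predictor more when full
    priority = alpha * reuse_prob - (1 - alpha) * ages / ages.max()
    return int(torch.argmin(priority))       # evict the lowest-priority way


if __name__ == "__main__":
    model = ReusePredictorTCN()
    window = torch.randn(4, 8, 16)           # 4 candidate ways, 16-access history
    probs = model(window)                     # untrained -> roughly 0.5 each
    victim = priority_aware_victim(probs,
                                   ages=torch.tensor([3., 1., 2., 0.]),
                                   occupancy=0.9)
    print("evict way", victim)
```

In this sketch the occupancy term shifts weight toward the learned prediction as the set fills up, mirroring the abstract's point that eviction priorities should adapt to both predicted reuse likelihood and cache occupancy.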
Similar Papers
LLaMCAT: Optimizing Large Language Model Inference with Cache Arbitration and Throttling
Hardware Architecture
Makes AI models run much faster on computers.
Efficient Online Learning with Predictive Coding Networks: Exploiting Temporal Correlations
Machine Learning (CS)
Robots learn faster with less effort.
DCO: Dynamic Cache Orchestration for LLM Accelerators through Predictive Management
Hardware Architecture
Makes AI faster by sharing computer memory.