AttnCache: Accelerating Self-Attention Inference for LLM Prefill via Attention Cache
By: Dinghong Song, Yuan Feng, Yiwei Wang, and more
Potential Business Impact:
Makes AI understand text much faster.
Large Language Models (LLMs) are widely used in generative applications such as chatting, code generation, and reasoning. However, many real-world workloads such as classification, question answering, recommendation, and text embedding rely solely on the prefill stage of inference, where the model encodes input sequences without performing autoregressive decoding. In these prefill-only scenarios, the self-attention computation becomes the primary performance bottleneck due to its quadratic complexity with respect to sequence length. In this paper, we observe that semantically different sentences often produce similar attention maps across layers and heads. Building on this insight, we propose AttnCache, a framework that accelerates the prefill stage of LLM inference by retrieving and reusing similar attention maps. Based on an attention map memorization database, AttnCache employs efficient caching and similarity search techniques to identify and reuse pre-cached attention maps during inference, thereby reducing the computational overhead of self-attention. Experimental results show that AttnCache achieves an average of 1.2x end-to-end and 2x attention speedup on CPU, and 1.6x end-to-end and 3x attention speedup on GPU, with negligible accuracy degradation.
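To make the retrieve-and-reuse idea concrete, the following is a minimal, single-head NumPy sketch of attention with a similarity-searched cache of attention maps. The class name `AttnMapCache`, the mean-pooled feature vector used as the search key, and the cosine-similarity threshold are illustrative assumptions; the paper's actual memorization database, indexing, and search method may differ.

```python
import numpy as np

class AttnMapCache:
    """Toy attention-map cache: stores (feature vector, attention map) pairs
    and returns a cached map when a query is sufficiently similar.
    Feature choice and threshold are illustrative, not the paper's design."""

    def __init__(self, threshold=0.98):
        self.keys = []          # feature vectors describing cached inputs
        self.maps = []          # corresponding precomputed attention maps
        self.threshold = threshold

    def _features(self, hidden_states):
        # Summarize the input with a mean-pooled, L2-normalized vector.
        v = hidden_states.mean(axis=0)
        return v / (np.linalg.norm(v) + 1e-8)

    def lookup(self, hidden_states):
        if not self.keys:
            return None
        q = self._features(hidden_states)
        sims = np.stack(self.keys) @ q      # cosine similarity via dot product
        best = int(np.argmax(sims))
        return self.maps[best] if sims[best] >= self.threshold else None

    def insert(self, hidden_states, attn_map):
        self.keys.append(self._features(hidden_states))
        self.maps.append(attn_map)


def attention_with_cache(hidden_states, W_q, W_k, W_v, cache):
    """Single-head self-attention that reuses a cached attention map when a
    similar input was seen before, skipping the quadratic QK^T softmax."""
    V = hidden_states @ W_v
    attn = cache.lookup(hidden_states)
    if attn is None:                         # cache miss: compute attention
        Q, K = hidden_states @ W_q, hidden_states @ W_k
        scores = Q @ K.T / np.sqrt(Q.shape[-1])
        attn = np.exp(scores - scores.max(-1, keepdims=True))
        attn /= attn.sum(-1, keepdims=True)
        cache.insert(hidden_states, attn)
    return attn @ V                          # cache hit reuses attn directly
```

On a cache hit, only the value projection and the attn @ V product are computed, which is where the reported attention-stage speedup would come from in this simplified setting.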
Similar Papers
LLMCache: Layer-Wise Caching Strategies for Accelerated Reuse in Transformer Inference
Computation and Language
Makes AI answer questions much faster.
Accelerating LLM Inference Throughput via Asynchronous KV Cache Prefetching
Machine Learning (CS)
Makes AI think much faster by using smart memory.