KV Cache Recycling to Expand Usable Context Capacity in Low Parameter LLMs
By: Prashant Pandey
Potential Business Impact:
Reuses old computer thoughts to make new ones faster.
We investigate whether attention key-value (KV) states computed for one prompt to a small LLM can be reused to accelerate inference on a new, similar prompt, effectively expanding its usable context memory via an approach we call token recycling. Using a standard Hugging Face setup with DialoGPT-medium (a 345M-parameter GPT-2-style decoder trained on 147M Reddit exchanges from 2005 to 2017) as the testbed, we build a cache of past activations, retrieve entries by sentence-embedding similarity, and reuse cached past key values when the cached prompt is an exact prefix of the new input. We compare recycled vs. baseline runs on latency and output fidelity, and log reuse depth in tokens. Reproduction requires no model modifications: cached KVs are serialized to the CPU, reloaded, and supplied to the generate function to continue decoding from the cached prefix. In tests, we observe consistent speedups when prefix overlap exists, with no material degradation in output semantics; when overlap is absent, behavior matches the baseline.
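The sketch below illustrates the recycling mechanism under stated assumptions: it uses the public Hugging Face transformers API with the legacy tuple cache format, replaces the sentence-embedding retrieval with a simple exact-prefix scan over cached prompts, and continues decoding with an explicit greedy forward-pass loop rather than handing the reloaded cache to generate(). The store name, helper functions, and example prompts are illustrative, not the paper's actual code.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "microsoft/DialoGPT-medium"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME).eval()

# prompt text -> per-layer (key, value) tensors, serialized to CPU
kv_store = {}

@torch.no_grad()
def cache_prompt(prompt: str) -> None:
    """Prefill the model on `prompt` and keep its KV states on the CPU."""
    ids = tokenizer(prompt, return_tensors="pt").input_ids
    out = model(ids, use_cache=True)
    # Legacy tuple format: one (key, value) pair per layer,
    # each shaped (batch, heads, seq_len, head_dim).
    kv_store[prompt] = tuple((k.cpu(), v.cpu()) for k, v in out.past_key_values)

@torch.no_grad()
def generate_with_recycling(prompt: str, max_new_tokens: int = 40) -> str:
    """Greedy-decode `prompt`, reusing cached KVs when a cached prompt is an exact prefix.

    Assumes the new prompt strictly extends any matching cached prefix.
    """
    ids = tokenizer(prompt, return_tensors="pt").input_ids
    past, reused_tokens = None, 0
    for cached_prompt, cached_kv in kv_store.items():
        if prompt.startswith(cached_prompt):
            past = tuple((k.clone(), v.clone()) for k, v in cached_kv)
            reused_tokens = past[0][0].shape[2]  # cached sequence length in tokens
            break
    # Only the non-cached suffix needs a fresh forward pass.
    step_ids = ids[:, reused_tokens:] if past is not None else ids
    generated = []
    for _ in range(max_new_tokens):
        out = model(step_ids, past_key_values=past, use_cache=True)
        past = out.past_key_values
        next_id = out.logits[:, -1, :].argmax(dim=-1, keepdim=True)
        if next_id.item() == tokenizer.eos_token_id:
            break
        generated.append(next_id.item())
        step_ids = next_id
    return tokenizer.decode(generated, skip_special_tokens=True)

cache_prompt("Hello, how are you today?")
print(generate_with_recycling("Hello, how are you today? Any plans for the weekend?"))
```

Logging reuse depth amounts to recording reused_tokens per request; a latency comparison would time generate_with_recycling with the store populated versus empty on the same prompts.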
Similar Papers
Hold Onto That Thought: Assessing KV Cache Compression On Reasoning
Computation and Language
Helps AI remember more for complex thinking.
Towards More Economical Context-Augmented LLM Generation by Reusing Stored KV Cache
Networking and Internet Architecture
Saves computer time and money by reusing text.
VLCache: Computing 2% Vision Tokens and Reusing 98% for Vision-Language Inference
CV and Pattern Recognition
Saves computer power by remembering past work.