Leveraging KV Similarity for Online Structured Pruning in LLMs
By: Jungmin Lee, Gwangeun Byeon, Yulhwa Kim, and more
Potential Business Impact:
Makes AI models run faster without extra training, while keeping their accuracy.
Pruning has emerged as a promising direction for accelerating large language model (LLM) inference, yet existing approaches often suffer from instability because they rely on offline calibration data that may not generalize across inputs. In this work, we introduce Token Filtering, a lightweight online structured pruning technique that makes pruning decisions directly during inference without any calibration data. The key idea is to measure token redundancy via joint key-value similarity and skip redundant attention computations, thereby reducing inference cost while preserving critical information. To further enhance stability, we design a variance-aware fusion strategy that adaptively weights key and value similarity across heads, ensuring that informative tokens are retained even under high pruning ratios. This design introduces no additional memory overhead and provides a more reliable criterion for token importance. Extensive experiments on LLaMA-2 (7B/13B), LLaMA-3 (8B), and Mistral (7B) demonstrate that Token Filtering consistently outperforms prior structured pruning methods, preserving accuracy on commonsense reasoning benchmarks and maintaining strong performance on challenging tasks such as MMLU, even with 50% pruning.
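To make the idea concrete, here is a minimal sketch of what such an online, calibration-free criterion could look like. It scores each token's redundancy from the cosine similarity of its key and value vectors to those of the preceding token, fuses the two signals with per-head variance-based weights, and keeps the least redundant tokens. The function names and the exact fusion rule are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def token_redundancy_scores(K, V, eps=1e-8):
    """Score token redundancy from joint key-value similarity.

    K, V: (num_heads, seq_len, head_dim) key/value tensors for one layer.
    Returns: (seq_len,) scores, higher = more redundant.
    """
    def cos_sim_to_prev(X):
        # Cosine similarity between each token and its predecessor, per head.
        Xn = X / (np.linalg.norm(X, axis=-1, keepdims=True) + eps)
        sim = np.einsum("htd,htd->ht", Xn[:, 1:], Xn[:, :-1])
        # The first token has no predecessor; mark it as non-redundant.
        return np.concatenate([np.full((X.shape[0], 1), -1.0), sim], axis=1)

    sim_k = cos_sim_to_prev(K)  # (num_heads, seq_len)
    sim_v = cos_sim_to_prev(V)

    # Variance-aware fusion (assumed form): give more weight, per head, to
    # whichever similarity signal varies more and is thus more discriminative.
    var_k = sim_k.var(axis=1, keepdims=True)
    var_v = sim_v.var(axis=1, keepdims=True)
    w_k = var_k / (var_k + var_v + eps)
    fused = w_k * sim_k + (1.0 - w_k) * sim_v  # (num_heads, seq_len)

    # Aggregate across heads into one redundancy score per token.
    return fused.mean(axis=0)

def filter_tokens(K, V, prune_ratio=0.5):
    """Return sorted indices of tokens to keep after pruning."""
    scores = token_redundancy_scores(K, V)
    n_keep = max(1, int(round(K.shape[1] * (1.0 - prune_ratio))))
    keep = np.argsort(scores)[:n_keep]  # lowest-redundancy tokens survive
    return np.sort(keep)                # preserve original token order

# Example: 32 heads, 128 tokens, head dim 128, pruned to 64 tokens.
rng = np.random.default_rng(0)
K = rng.standard_normal((32, 128, 128))
V = rng.standard_normal((32, 128, 128))
keep = filter_tokens(K, V, prune_ratio=0.5)
```

Because the scores are derived entirely from the keys and values already produced during inference, a criterion of this shape needs no calibration data and no extra memory beyond a few per-head scalars, matching the overhead claims in the abstract.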
Similar Papers
Towards Efficient VLMs: Information-Theoretic Driven Compression via Adaptive Structural Pruning
CV and Pattern Recognition
Makes AI models smaller and faster.
Cache What Lasts: Token Retention for Memory-Bounded KV Cache in LLMs
Machine Learning (CS)
Keeps important computer memories for faster AI.
Keyframe-oriented Vision Token Pruning: Enhancing Efficiency of Large Vision Language Models on Long-Form Video Processing
Machine Learning (CS)
Makes computers understand long videos faster.