Adaptive Layer Selection for Layer-Wise Token Pruning in LLM Inference
By: Rei Taniguchi, Yuyang Dong, Makoto Onizuka, and more
Potential Business Impact:
Makes AI smarter and faster using less memory.
Due to the prevalence of large language models (LLMs), key-value (KV) cache reduction for LLM inference has received considerable attention. Among the numerous works proposed in recent years, layer-wise token pruning approaches, which select a subset of tokens at particular layers to retain in the KV cache and prune the rest, are among the most popular schemes. These approaches typically rely on a set of pre-defined layers at which tokens are selected. Such a design is inflexible: accuracy varies significantly across tasks and deteriorates on harder tasks such as KV retrieval. In this paper, we propose ASL, a training-free method that adaptively chooses the selection layer for KV cache reduction by exploiting the variance of token ranks ordered by attention score. The proposed method balances performance across different tasks while meeting the user-specified KV budget. ASL operates during the prefilling stage and can be combined with existing KV cache reduction methods such as SnapKV to optimize the decoding stage. Through evaluations on the InfiniteBench, RULER, and NIAH benchmarks, we show that, equipped with one-shot token selection, where tokens are selected at a single layer and propagated to deeper layers, ASL outperforms state-of-the-art layer-wise token selection methods in accuracy while maintaining decoding speed and KV cache reduction.
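The abstract describes the core idea (pick the selection layer adaptively from the stability of attention-score token rankings, then keep that token subset for all deeper layers) without code. Below is a minimal sketch, assuming a per-token attention-mass score at each layer and a simple rank-variance stability test. The function name `choose_selection_layer`, the look-back window, and the threshold are illustrative assumptions, not the authors' exact criterion or implementation.

```python
import numpy as np

def choose_selection_layer(attention_scores, budget_ratio=0.2, variance_threshold=0.05):
    """Hypothetical sketch of adaptive selection-layer choice during prefilling.

    attention_scores: list of per-layer arrays of shape (num_tokens,), e.g. the
        attention mass each prompt token receives at that layer (an assumption,
        not necessarily the paper's exact statistic).

    Walks through the layers and picks the first one at which the rank ordering
    of tokens has stabilized, measured here by the variance of each token's rank
    over a small window of recent layers. Tokens are then selected once at that
    layer ("one-shot") and the same subset is kept in the KV cache for all
    deeper layers.
    """
    num_layers = len(attention_scores)
    num_tokens = attention_scores[0].shape[0]
    keep = max(1, int(budget_ratio * num_tokens))  # user-specified KV budget

    # Rank of every token at every layer (0 = highest attention score).
    ranks = np.stack([np.argsort(np.argsort(-s)) for s in attention_scores])

    window = 3  # look-back window of layers used to judge rank stability
    for layer in range(window, num_layers):
        # Normalized mean variance of token ranks over the recent window.
        rank_var = ranks[layer - window:layer + 1].var(axis=0).mean() / num_tokens
        if rank_var < variance_threshold:
            selected = np.argsort(-attention_scores[layer])[:keep]
            return layer, np.sort(selected)

    # Fall back to the last layer if the ranking never stabilizes.
    selected = np.argsort(-attention_scores[-1])[:keep]
    return num_layers - 1, np.sort(selected)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy per-layer scores for a 32-layer model and a 128-token prompt,
    # with layer-to-layer noise that shrinks in deeper layers.
    base = rng.random(128)
    scores = [base + 0.5 * rng.random(128) / (layer + 1) for layer in range(32)]
    layer, kept = choose_selection_layer(scores, budget_ratio=0.25)
    print(f"selection layer: {layer}, tokens kept: {len(kept)}")
```

In this sketch the returned token subset would then be the only prompt KV entries materialized for layers at and beyond the chosen one, which is where the cache reduction comes from; combining it with a decoding-stage method such as SnapKV is orthogonal and not shown here.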
Similar Papers
Leveraging KV Similarity for Online Structured Pruning in LLMs
Computation and Language
Makes AI models faster and smarter without extra training.
LLMCache: Layer-Wise Caching Strategies for Accelerated Reuse in Transformer Inference
Computation and Language
Makes AI answer questions much faster.
Iterative Layer Pruning for Efficient Translation Inference
Computation and Language
Makes translation programs smaller and faster.