Score: 2

Adaptive Layer Selection for Layer-Wise Token Pruning in LLM Inference

Published: January 12, 2026 | arXiv ID: 2601.07667v1

By: Rei Taniguchi, Yuyang Dong, Makoto Onizuka, and more

Potential Business Impact:

Speeds up LLM inference and cuts memory use by shrinking the KV cache, with no retraining required and without sacrificing accuracy.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Due to the prevalence of large language models (LLMs), key-value (KV) cache reduction for LLM inference has received considerable attention. Among the numerous approaches proposed in recent years, layer-wise token pruning, which selects a subset of tokens at particular layers to retain in the KV cache and prunes the rest, is one of the most popular schemes. Existing methods primarily rely on a pre-defined set of layers at which tokens are selected. Such a design is inflexible: accuracy varies significantly across tasks and deteriorates on harder tasks such as KV retrieval. In this paper, we propose ASL, a training-free method that adaptively chooses the selection layer for KV cache reduction by exploiting the variance of token ranks ordered by attention score. The proposed method balances performance across different tasks while meeting the user-specified KV budget. ASL operates during the prefilling stage and can be combined with existing KV cache reduction methods such as SnapKV to optimize the decoding stage. In evaluations on the InfiniteBench, RULER, and NIAH benchmarks, we show that, equipped with one-shot token selection, where tokens are selected at a single layer and propagated to deeper layers, ASL outperforms state-of-the-art layer-wise token selection methods in accuracy while maintaining decoding speed and KV cache reduction.
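To make the core idea concrete, below is a minimal illustrative sketch of rank-variance-based adaptive layer selection during prefilling, as described in the abstract. It is not the authors' implementation: the function name `choose_selection_layer`, the `variance_threshold` parameter, the head-averaging of attention scores, and the stopping criterion are all assumptions introduced for illustration.

```python
import torch

def choose_selection_layer(attn_scores_per_layer, kv_budget, variance_threshold=0.05):
    """Hypothetical sketch of adaptive selection-layer choice for one-shot token pruning.

    attn_scores_per_layer: list of tensors, one per layer, each of shape
        (num_heads, seq_len), holding aggregated attention scores per token
        collected during the prefilling stage.
    kv_budget: number of tokens to retain in the KV cache from the chosen
        selection layer onward.
    Returns (selection_layer_index, indices_of_tokens_to_keep).
    """
    prev_ranks = None
    for layer_idx, scores in enumerate(attn_scores_per_layer):
        # Aggregate attention over heads to get one importance score per token.
        token_scores = scores.mean(dim=0)  # shape: (seq_len,)
        # Double argsort turns scores into ranks (rank 0 = most attended token).
        ranks = torch.argsort(torch.argsort(token_scores, descending=True))
        if prev_ranks is not None:
            # Variance of normalized rank shifts between consecutive layers:
            # once token ranks stabilize, deeper layers would keep roughly the
            # same tokens, so selecting here and propagating the choice to
            # deeper layers (one-shot selection) should be safe.
            rank_shift = (ranks - prev_ranks).float() / len(ranks)
            if rank_shift.var() < variance_threshold:
                keep = torch.topk(token_scores, k=min(kv_budget, len(ranks))).indices
                return layer_idx, keep
        prev_ranks = ranks
    # Fallback: if ranks never stabilize, select at the last layer.
    keep = torch.topk(token_scores, k=min(kv_budget, len(token_scores))).indices
    return len(attn_scores_per_layer) - 1, keep
```

The double `argsort` is simply a compact way to convert per-token attention scores into ranks; the actual statistic, threshold, and budget-enforcement logic in ASL may differ from this sketch.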

Country of Origin
🇯🇵 Japan

Repos / Data Links

Page Count
20 pages

Category
Computer Science:
Computation and Language