Lil: Less is Less When Applying Post-Training Sparse-Attention Algorithms in Long-Decode Stage
By: Junhao Hu, Fangze Li, Mingtao Xu, and more
Potential Business Impact:
Saves computer time by stopping early.
Large language models (LLMs) demonstrate strong capabilities across a wide range of complex tasks and are increasingly deployed at scale, placing significant demands on inference efficiency. Prior work typically decomposes inference into prefill and decode stages, with the decode stage dominating total latency. To reduce time and memory complexity in the decode stage, a line of work introduces sparse-attention algorithms. In this paper, we show, both empirically and theoretically, that sparse attention can paradoxically increase end-to-end complexity: information loss often induces significantly longer sequences, a phenomenon we term "Less is Less" (Lil). To mitigate the Lil problem, we propose an early-stopping algorithm that detects the threshold where information loss exceeds information gain during sparse decoding. Our early-stopping algorithm reduces token consumption by up to 90% with a marginal accuracy degradation of less than 2% across reasoning-intensive benchmarks.
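The abstract describes the early-stopping idea only at a high level. The sketch below is a hypothetical illustration of what such a stopping rule could look like during a sparse-decoding loop, assuming a per-step confidence score as a stand-in for "information gain"; the function names, smoothing window, and threshold values are invented for illustration and are not the paper's actual criterion.

```python
# Illustrative sketch (not the paper's algorithm): early stopping during
# sparse decoding. We assume a hypothetical per-step "information gain"
# signal -- approximated here by a token-level confidence in [0, 1] -- and
# stop once its smoothed estimate falls below a threshold, on the premise
# that further sparse decoding mostly accumulates information loss.

from collections import deque
import random


def sparse_decode_with_early_stop(
    step_fn,                 # callable: step_fn(t) -> (token_id, confidence in [0, 1])
    max_new_tokens=4096,
    window=64,               # smoothing window for the gain proxy (assumed)
    gain_threshold=0.35,     # stop when smoothed gain drops below this (assumed)
    min_tokens=128,          # never stop before generating this many tokens
    eos_id=2,
):
    """Decode up to max_new_tokens, stopping early when the smoothed
    confidence proxy suggests information loss outweighs information gain."""
    tokens = []
    recent = deque(maxlen=window)

    for t in range(max_new_tokens):
        token_id, confidence = step_fn(t)
        tokens.append(token_id)
        recent.append(confidence)

        if token_id == eos_id:
            break

        # Early-stop check: only after a warm-up period and a full window.
        if len(tokens) >= min_tokens and len(recent) == window:
            smoothed_gain = sum(recent) / window
            if smoothed_gain < gain_threshold:
                break

    return tokens


if __name__ == "__main__":
    # Toy stand-in for a sparse-attention decode step: confidence decays over
    # time, mimicking a regime where additional tokens add little information.
    def toy_step(t):
        confidence = max(0.0, 0.9 - 0.004 * t) + random.uniform(-0.05, 0.05)
        return random.randint(3, 100), min(1.0, max(0.0, confidence))

    out = sparse_decode_with_early_stop(toy_step)
    print(f"generated {len(out)} tokens before early stop")
```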
Similar Papers
Less Is More: Training-Free Sparse Attention with Global Locality for Efficient Reasoning
Computation and Language
Makes smart computers think faster with less effort.
A Preliminary Study on the Promises and Challenges of Native Top-$k$ Sparse Attention
Computation and Language
Makes AI understand long texts faster and better.
The Sparse Frontier: Sparse Attention Trade-offs in Transformer LLMs
Computation and Language
Makes AI understand much longer stories.