Score: 1

Lil: Less is Less When Applying Post-Training Sparse-Attention Algorithms in Long-Decode Stage

Published: January 6, 2026 | arXiv ID: 2601.03043v1

By: Junhao Hu, Fangze Li, Mingtao Xu, and more

Potential Business Impact:

Cuts LLM inference compute by stopping sparse-attention decoding early, reducing token consumption by up to 90% with less than 2% accuracy loss.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Large language models (LLMs) demonstrate strong capabilities across a wide range of complex tasks and are increasingly deployed at scale, placing significant demands on inference efficiency. Prior work typically decomposes inference into prefill and decode stages, with the decode stage dominating total latency. To reduce time and memory complexity in the decode stage, a line of work introduces sparse-attention algorithms. In this paper, we show, both empirically and theoretically, that sparse attention can paradoxically increase end-to-end complexity: information loss often induces significantly longer sequences, a phenomenon we term "Less is Less" (Lil). To mitigate the Lil problem, we propose an early-stopping algorithm that detects the threshold where information loss exceeds information gain during sparse decoding. Our early-stopping algorithm reduces token consumption by up to 90% with a marginal accuracy degradation of less than 2% across reasoning-intensive benchmarks.
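The abstract describes an early-stopping criterion that halts sparse decoding once information loss outweighs information gain. The paper's actual algorithm is not reproduced here, so the sketch below is only an illustrative approximation under stated assumptions: the proxies (next-token entropy for "gain", pruned attention mass for "loss"), the sliding `window`, and the function names `token_entropy` and `should_stop_early` are all hypothetical, not the authors' method.

```python
# Minimal sketch (not the paper's implementation) of an early-stopping check
# during sparse-attention decoding. "Gain" is proxied by how confidently the
# model commits to each new token; "loss" by the attention mass dropped when
# the KV cache is sparsified. Both proxies are assumptions for illustration.
import math
from typing import List


def token_entropy(probs: List[float]) -> float:
    """Shannon entropy of the next-token distribution (higher = less confident)."""
    return -sum(p * math.log(p) for p in probs if p > 0.0)


def should_stop_early(step_probs: List[List[float]],
                      dropped_attention_mass: List[float],
                      window: int = 8) -> bool:
    """Stop once, over a sliding window, the average information lost to
    sparsification exceeds the average information gained per new token."""
    if len(step_probs) < window:
        return False
    # Gain per step: distance from a maximally uncertain (uniform) prediction.
    recent_gain = sum(math.log(len(p)) - token_entropy(p)
                      for p in step_probs[-window:]) / window
    # Loss per step: attention mass pruned away by the sparse algorithm.
    recent_loss = sum(dropped_attention_mass[-window:]) / window
    return recent_loss > recent_gain


# Toy usage: a decode loop that terminates when loss overtakes gain.
if __name__ == "__main__":
    probs_history, dropped_history = [], []
    for step in range(64):
        # Placeholder distributions; a real decoder would supply these.
        sharp = max(0.05, 0.9 - 0.02 * step)
        probs_history.append([sharp] + [(1 - sharp) / 3] * 3)
        dropped_history.append(min(0.9, 0.01 * step))  # pruning loss grows
        if should_stop_early(probs_history, dropped_history):
            print(f"early stop at decode step {step}")
            break
```

In this toy run, per-token confidence decays while pruned attention mass grows, so the loop halts partway through decoding; the paper's reported 90% token savings come from its own detection rule, not from this heuristic.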

Repos / Data Links

Page Count
14 pages

Category
Computer Science:
Computation and Language