
Learning When Not to Attend Globally

Published: December 27, 2025 | arXiv ID: 2512.22562v1

By: Xuan Luo, Kailai Zhang, Xifeng Yan

Potential Business Impact:

Language models learn to read the way people do, attending locally and looking back only when needed, which cuts inference compute and energy.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

When reading books, humans focus primarily on the current page, flipping back to recap prior context only when necessary. Similarly, we demonstrate that Large Language Models (LLMs) can learn to dynamically determine when to attend to global context. We propose All-or-Here Attention (AHA), which utilizes a binary router per attention head to dynamically toggle between full attention and local sliding window attention for each token. Our results indicate that with a window size of 256 tokens, up to 93% of the original full attention operations can be replaced by sliding window attention without performance loss. Furthermore, by evaluating AHA across various window sizes, we identify a long-tail distribution in context dependency, where the necessity for full attention decays rapidly as the local window expands. By decoupling local processing from global access, AHA reveals that full attention is largely redundant, and that efficient inference requires only on-demand access to the global context.
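The mechanism described in the abstract can be sketched in a few lines of PyTorch. The snippet below is an illustrative reconstruction, not the authors' implementation: the module name AHAAttention, the hard-threshold router, and the mask-based handling of the sliding window are assumptions made for clarity, and training the router would require a differentiable relaxation rather than the hard threshold shown here.

```python
# A minimal sketch of the All-or-Here Attention (AHA) idea: each attention
# head carries a binary router that decides, per query token, whether to use
# full causal attention ("all") or local sliding-window attention ("here").
import torch
import torch.nn as nn


class AHAAttention(nn.Module):
    def __init__(self, d_model: int, n_heads: int, window: int = 256):
        super().__init__()
        assert d_model % n_heads == 0
        self.n_heads = n_heads
        self.d_head = d_model // n_heads
        self.window = window
        self.qkv = nn.Linear(d_model, 3 * d_model, bias=False)
        self.out = nn.Linear(d_model, d_model, bias=False)
        # Binary router: one logit per head, computed from each token.
        self.router = nn.Linear(d_model, n_heads, bias=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        B, T, _ = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)

        # Reshape to (B, heads, T, d_head).
        def split(t):
            return t.view(B, T, self.n_heads, self.d_head).transpose(1, 2)
        q, k, v = split(q), split(k), split(v)

        # Causal mask (full attention) and sliding-window causal mask.
        i = torch.arange(T, device=x.device)
        causal = i[:, None] >= i[None, :]                        # (T, T)
        local = causal & (i[:, None] - i[None, :] < self.window)

        # Router: per token and per head, choose full ("all") or local ("here").
        # A hard threshold is used for clarity; training would need a
        # differentiable relaxation (e.g. straight-through or Gumbel-sigmoid).
        use_all = self.router(x).transpose(1, 2) > 0             # (B, heads, T)
        mask = torch.where(use_all[..., None], causal, local)    # (B, heads, T, T)

        scores = q @ k.transpose(-2, -1) / self.d_head ** 0.5
        scores = scores.masked_fill(~mask, float("-inf"))
        y = scores.softmax(dim=-1) @ v                           # (B, heads, T, d_head)
        y = y.transpose(1, 2).contiguous().view(B, T, -1)
        return self.out(y)


# Tiny smoke test on random inputs.
if __name__ == "__main__":
    layer = AHAAttention(d_model=64, n_heads=4, window=8)
    out = layer(torch.randn(2, 32, 64))
    print(out.shape)  # torch.Size([2, 32, 64])
```

Selecting between the two masks per head and per token keeps the sketch short, but it does not by itself save compute; a practical implementation would dispatch "here"-routed tokens to a sliding-window attention kernel so the claimed reduction in full-attention operations translates into real savings.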

Country of Origin
πŸ‡ΊπŸ‡Έ United States

Page Count
11 pages

Category
Computer Science:
Computation and Language