Score: 1

Learning to Focus: Focal Attention for Selective and Scalable Transformers

Published: November 10, 2025 | arXiv ID: 2511.06818v1

By: Dhananjay Ram, Wei Xia, Stefano Soatto

BigTech Affiliations: Amazon

Potential Business Impact:

Helps AI models focus on the most relevant words in long texts, reaching the same accuracy with smaller models and less training data.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Attention is a core component of the transformer architecture, whether in encoder-only, decoder-only, or encoder-decoder models. However, standard softmax attention often produces noisy probability distributions, which can impair effective feature selection at every layer of these models, particularly for long contexts. We propose Focal Attention, a simple yet effective modification that sharpens the attention distribution by controlling the softmax temperature, either as a fixed hyperparameter or as a learnable parameter during training. This sharpening enables the model to concentrate on the most relevant tokens while suppressing irrelevant ones. Empirically, Focal Attention scales more favorably than the standard transformer with respect to model size, training data, and context length. Across diverse benchmarks, it achieves the same accuracy with up to 42% fewer parameters or 33% less training data. On long-context tasks, it delivers substantial relative improvements ranging from 17% to 82%, demonstrating its effectiveness in real-world applications.
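To illustrate the idea described in the abstract, the sketch below shows one plausible way to add a fixed or learnable softmax temperature to scaled dot-product attention in PyTorch. The class name FocalAttention, the log-temperature parameterization, and the default hyperparameter values are assumptions for illustration, not the authors' released implementation.

```python
import math
from typing import Optional

import torch
import torch.nn as nn
import torch.nn.functional as F


class FocalAttention(nn.Module):
    """Multi-head scaled dot-product attention with a softmax temperature
    that can be fixed or learned; a temperature below 1 sharpens the
    attention distribution (a sketch, not the paper's official code)."""

    def __init__(self, d_model: int, n_heads: int,
                 temperature: float = 0.5, learnable_temperature: bool = True):
        super().__init__()
        assert d_model % n_heads == 0
        self.n_heads = n_heads
        self.d_head = d_model // n_heads
        self.qkv = nn.Linear(d_model, 3 * d_model)
        self.out = nn.Linear(d_model, d_model)
        log_temp = torch.tensor(math.log(temperature))
        if learnable_temperature:
            # Learn log-temperature so the temperature stays positive.
            self.log_temp = nn.Parameter(log_temp)
        else:
            self.register_buffer("log_temp", log_temp)

    def forward(self, x: torch.Tensor,
                mask: Optional[torch.Tensor] = None) -> torch.Tensor:
        b, t, _ = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        # Reshape to (batch, heads, tokens, head_dim).
        q, k, v = (z.view(b, t, self.n_heads, self.d_head).transpose(1, 2)
                   for z in (q, k, v))
        temp = self.log_temp.exp()
        # Dividing the logits by an extra temperature < 1 sharpens the softmax,
        # concentrating probability mass on the most relevant tokens.
        scores = (q @ k.transpose(-2, -1)) / (math.sqrt(self.d_head) * temp)
        if mask is not None:
            scores = scores.masked_fill(mask == 0, float("-inf"))
        attn = F.softmax(scores, dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(b, t, -1)
        return self.out(out)
```

In this sketch, setting learnable_temperature=True lets the model tune how sharp each layer's attention should be during training, while passing a fixed value treats the temperature as a hyperparameter, matching the two options mentioned in the abstract.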

Country of Origin
🇺🇸 United States

Page Count
13 pages

Category
Computer Science:
Computation and Language