Scaling Bidirectional Spans and Span Violations in Attention Mechanism

Published: December 15, 2025 | arXiv ID: 2512.13033v1

By: Jongwook Kim, Sangheon Yun, Sukjin Yoon

Potential Business Impact:

Makes AI models learn faster by correcting inefficient gradient updates during training.

Business Areas:
A/B Testing; Data and Analytics

The canonical $O(N^2)$ Transformer remains the empirical performance frontier in sequence modeling, and its training can be further optimized by addressing geometric inefficiency. We propose an optimization framework that leverages an asymmetric projection to decompose the backward-pass gradients into parallel spans and orthogonal violations, while keeping the canonical forward-pass $QKV$ structure intact. Through consistent experimental validation across various decomposition and projection setups, we provide strong empirical evidence that the standard attention gradient is suboptimal. We demonstrate that selectively scaling these components, focusing primarily on the $0^{\text{th}}$-order bidirectional parallel spans, yields the most effective learning signal. On the limited WikiText-2 dataset, and with a deliberately crude configuration, this method achieved a $0.56\%$ reduction in validation loss, confirming the framework's fundamental validity and suggesting significant potential gains on larger datasets and deeper training regimes.
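
The core operation the abstract describes, splitting a gradient into the part that lies inside a chosen span and the orthogonal "violation" that falls outside it, can be illustrated with a plain orthogonal projection. The NumPy sketch below is a minimal illustration under stated assumptions: it uses a symmetric projection rather than the paper's asymmetric one, and the names `decompose_and_scale`, `alpha_span`, and `alpha_violation` are hypothetical, not the authors' API.

```python
# Minimal sketch (NumPy), not the paper's implementation: split a gradient
# matrix into the component lying in the column space of a reference basis
# (the "parallel span") and the orthogonal remainder (the "span violation"),
# then rescale the two parts independently before recombining.
import numpy as np

def decompose_and_scale(grad, basis, alpha_span=1.1, alpha_violation=1.0):
    # Orthonormalize the basis columns; QR preserves their span.
    q, _ = np.linalg.qr(basis)
    parallel = q @ (q.T @ grad)      # projection of the gradient onto the span
    violation = grad - parallel      # orthogonal residual ("span violation")
    return alpha_span * parallel + alpha_violation * violation

# Toy usage on a random 16x4 gradient with a random 16x4 basis.
rng = np.random.default_rng(0)
grad = rng.standard_normal((16, 4))
basis = rng.standard_normal((16, 4))
scaled_grad = decompose_and_scale(grad, basis)

# With alpha_violation == 1, only the span-parallel part of the update changes.
q, _ = np.linalg.qr(basis)
parallel = q @ (q.T @ grad)
assert np.allclose(scaled_grad - grad, 0.1 * parallel)
```

Keeping `alpha_violation` at 1 while amplifying `alpha_span` mirrors the abstract's idea of selectively scaling the parallel-span component of the learning signal while leaving the violation untouched.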

Page Count
8 pages

Category
Computer Science:
Machine Learning (CS)