Token Sample Complexity of Attention
By: Léa Bohbot, Cyril Letrouit, Gabriel Peyré, and more
Potential Business Impact:
Makes AI understand longer stories better.
As context windows in large language models continue to expand, it is essential to characterize how attention behaves at extreme sequence lengths. We introduce token-sample complexity: the rate at which attention computed on $n$ tokens converges to its infinite-token limit. We establish finite-$n$ convergence bounds at two levels: pointwise uniform convergence of the attention map, and convergence of moments for the transformed token distribution. For compactly supported (and more generally sub-Gaussian) distributions, our first result shows that the attention map converges uniformly on a ball of radius $R$ at rate $C(R)/\sqrt{n}$, where $C(R)$ grows exponentially with $R$. For large $R$, this estimate loses practical value, and our second result addresses this issue by establishing convergence rates for the moments of the transformed distribution (the token output of the attention layer). In this case, the rate is $C'(R)/n^{\beta}$ with $\beta<\tfrac{1}{2}$, and $C'(R)$ depends polynomially on the size of the support of the distribution. The exponent $\beta$ depends on the attention geometry and the spectral properties of the token distribution. We also examine the regime in which the attention parameter tends to infinity and the softmax approaches a hardmax; in this setting, we establish a logarithmic rate of convergence. Experiments on synthetic Gaussian data and real BERT models on Wikipedia text confirm our predictions.
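To make the notion of token-sample complexity concrete, the sketch below (not the authors' code) measures how an empirical attention map built from $n$ Gaussian tokens deviates from a proxy of its infinite-token limit, estimated from a much larger sample, over queries on a ball of radius $R$. The single-head form $\mathrm{Attn}_n(x)=\sum_i \mathrm{softmax}_i\, y_i$ with logits $\langle x, A y_i\rangle$, the choice $A=\mathrm{Id}$, and the large-sample proxy for the limit are all assumptions made for illustration.

```python
# Minimal sketch: empirical check of the C(R)/sqrt(n) uniform-convergence rate
# on synthetic Gaussian tokens. All modeling choices here are assumptions.

import numpy as np

rng = np.random.default_rng(0)
d, R = 2, 2.0                      # token dimension, radius of the query ball
A = np.eye(d)                      # attention parameter (assumed identity)

def attention_map(queries, tokens):
    """Empirical attention output for each query x: softmax over <x, A y_i> applied to tokens y_i."""
    logits = queries @ A @ tokens.T                   # shape (q, n)
    logits -= logits.max(axis=1, keepdims=True)       # numerical stability
    w = np.exp(logits)
    w /= w.sum(axis=1, keepdims=True)
    return w @ tokens                                 # shape (q, d)

# Queries on the sphere of radius R (a simple stand-in for the ball of radius R).
thetas = np.linspace(0, 2 * np.pi, 64, endpoint=False)
queries = R * np.stack([np.cos(thetas), np.sin(thetas)], axis=1)

# Proxy for the infinite-token limit: attention computed on a very large sample.
tokens_big = rng.standard_normal((200_000, d))
limit = attention_map(queries, tokens_big)

for n in [100, 400, 1600, 6400]:
    errs = []
    for _ in range(20):                               # average over resamplings
        tokens = rng.standard_normal((n, d))
        errs.append(np.abs(attention_map(queries, tokens) - limit).max())
    print(f"n={n:>5d}  sup-error ~ {np.mean(errs):.4f}  "
          f"(sqrt(n) * error ~ {np.sqrt(n) * np.mean(errs):.3f})")
```

If the $C(R)/\sqrt{n}$ rate holds, the rescaled quantity $\sqrt{n}\cdot\text{error}$ printed in the last column should stay roughly constant as $n$ grows; repeating the experiment with a larger $R$ would illustrate how the constant $C(R)$ inflates with the radius.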
Similar Papers
Critical attention scaling in long-context transformers
Machine Learning (CS)
Makes AI understand longer stories better.
Limitations of Normalization in Attention Mechanism
Machine Learning (CS)
Makes AI better at picking important words.
A Preliminary Study on the Promises and Challenges of Native Top-$k$ Sparse Attention
Computation and Language
Makes AI understand long texts faster and better.