Inductive Bias and Spectral Properties of Single-Head Attention in High Dimensions
By: Fabrizio Boncoraglio, Vittorio Erba, Emanuele Troiani, and more
Potential Business Impact:
Explains what attention layers learn and why they favor low-rank structure, which could guide how large AI models are trained and regularized.
We study empirical risk minimization in a single-head tied-attention layer trained on synthetic high-dimensional sequence tasks, given by the recently introduced attention-indexed model. Using tools from random matrix theory, spin-glass physics, and approximate message passing, we derive sharp asymptotics for training and test errors, locate interpolation and recovery thresholds, and characterize the limiting spectral distribution of the learned weights. Weight decay induces an implicit nuclear-norm regularization, favoring low-rank query and key matrices. Leveraging this, we compare the standard factorized training of query and key matrices with a direct parameterization in which their product is trained element-wise, revealing the inductive bias introduced by the factorized form. Remarkably, the predicted spectral distribution echoes empirical trends reported in large-scale transformers, offering a theoretical perspective consistent with these phenomena.
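To make the implicit-regularization claim concrete, the following standard variational identity is what usually links weight decay on factors to the nuclear norm of their product (a general linear-algebra fact, stated here as background rather than quoted from the paper): for any matrix $M \in \mathbb{R}^{d \times d}$ with $\mathrm{rank}(M) \le r$,

$$
\|M\|_{*} \;=\; \min_{\substack{Q, K \in \mathbb{R}^{d \times r} \\ Q K^\top = M}} \frac{1}{2}\left( \|Q\|_F^2 + \|K\|_F^2 \right),
$$

so an $\ell_2$ weight-decay penalty on the trained factors $Q$ and $K$ acts, at the optimum, as a nuclear-norm penalty on the product $Q K^\top$, biasing it toward low rank, whereas training the product element-wise incurs only a Frobenius-norm penalty and carries no such bias.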
Similar Papers
Low Rank Gradients and Where to Find Them
Machine Learning (CS)
Investigates where low-rank structure appears in the gradients used to train neural networks.
Approximate Gaussianity Beyond Initialisation in Neural Networks
Machine Learning (CS)
Examines how closely neural-network statistics stay Gaussian as training moves beyond initialisation.
Gaussian Equivalence for Self-Attention: Asymptotic Spectral Analysis of Attention Matrix
Machine Learning (Stat)
Analyzes the spectrum of the self-attention matrix through a Gaussian equivalence.