Gaussian Equivalence for Self-Attention: Asymptotic Spectral Analysis of Attention Matrix
By: Tomohiro Hayase, Benoît Collins, Ryo Karakida
Potential Business Impact:
Gives a clearer mathematical picture of how attention layers relate words to each other, which could guide the design and tuning of language models.
Self-attention layers have become fundamental building blocks of modern deep neural networks, yet their theoretical understanding remains limited, particularly from the perspective of random matrix theory. In this work, we provide a rigorous analysis of the singular value spectrum of the attention matrix and establish the first Gaussian equivalence result for attention. In a natural regime where the inverse temperature remains of constant order, we show that the singular value distribution of the attention matrix is asymptotically characterized by a tractable linear model. We further demonstrate that the distribution of squared singular values deviates from the Marchenko-Pastur law, contrary to what had been assumed in previous work. Our proof relies on two key ingredients: precise control of fluctuations in the normalization term and a refined linearization that leverages favorable Taylor expansions of the exponential. This analysis also identifies a threshold for linearization and elucidates why attention, despite not being an entrywise operation, admits a rigorous Gaussian equivalence in this regime.
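The spectral claim can be probed numerically. Below is a minimal sketch, assuming i.i.d. Gaussian queries and keys, an inverse temperature of order one, and a crude centering of the attention matrix; these choices are illustrative assumptions, not the paper's exact model or normalization. The script samples a softmax attention matrix, computes its squared singular values, and prints them next to a variance-matched Marchenko-Pastur density, so a reader can see the kind of deviation the paper analyzes rigorously.

# Minimal numerical sketch (illustrative assumptions, not the paper's exact setup):
# sample a softmax attention matrix with i.i.d. Gaussian queries and keys at a
# constant-order inverse temperature beta, then compare its squared-singular-value
# histogram with a variance-matched Marchenko-Pastur density.
import numpy as np


def softmax(scores, axis=-1):
    """Row-wise softmax with max-subtraction for numerical stability."""
    shifted = scores - scores.max(axis=axis, keepdims=True)
    weights = np.exp(shifted)
    return weights / weights.sum(axis=axis, keepdims=True)


def attention_matrix(n, d, beta=1.0, seed=0):
    """Attention matrix A = softmax(beta * Q K^T / sqrt(d)) with Gaussian Q, K."""
    rng = np.random.default_rng(seed)
    Q = rng.standard_normal((n, d))
    K = rng.standard_normal((n, d))
    scores = beta * (Q @ K.T) / np.sqrt(d)  # pre-softmax entries of constant order
    return softmax(scores, axis=1)


def marchenko_pastur_pdf(x, ratio=1.0, sigma2=1.0):
    """Marchenko-Pastur density for aspect ratio `ratio` and variance sigma2."""
    lam_minus = sigma2 * (1.0 - np.sqrt(ratio)) ** 2
    lam_plus = sigma2 * (1.0 + np.sqrt(ratio)) ** 2
    pdf = np.zeros_like(x, dtype=float)
    inside = (x > lam_minus) & (x < lam_plus)
    pdf[inside] = np.sqrt((lam_plus - x[inside]) * (x[inside] - lam_minus)) / (
        2.0 * np.pi * sigma2 * ratio * x[inside]
    )
    return pdf


if __name__ == "__main__":
    n, d = 1000, 1000
    A = attention_matrix(n, d, beta=1.0)

    # Crude centering to strip the dominant rank-one part (each row sums to 1,
    # so entries concentrate around 1/n); the paper's normalization is finer.
    B = np.sqrt(n) * (A - 1.0 / n)

    squared_sv = np.linalg.svd(B, compute_uv=False) ** 2
    hist, edges = np.histogram(squared_sv, bins=40, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    mp = marchenko_pastur_pdf(centers, ratio=1.0, sigma2=squared_sv.mean())

    # Coarse side-by-side comparison; a visible mismatch illustrates the
    # claimed deviation from the Marchenko-Pastur law.
    for c, h, m in zip(centers[::8], hist[::8], mp[::8]):
        print(f"x = {c:6.3f}   empirical ~ {h:6.3f}   MP ~ {m:6.3f}")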
Similar Papers
Spectral analysis of spatial-sign covariance matrices for heavy-tailed data with dependence
Statistics Theory
Helps computers understand messy data better.
Learning Advanced Self-Attention for Linear Transformers in the Singular Value Domain
Machine Learning (CS)
Helps computers understand complex patterns better.
Asymptotic behavior of eigenvalues of large rank perturbations of large random matrices
Mathematical Physics
Makes smart computer programs learn better.