The Bayesian Geometry of Transformer Attention
Naman Aggarwal, Siddhartha R. Dalal, Vishal Misra
Transformers often appear to perform Bayesian reasoning in context, but verifying this rigorously has been impossible: natural data lack analytic posteriors, and large models conflate reasoning with memorization. We address this by constructing \emph{Bayesian wind tunnels} -- controlled environments where the true posterior is known in closed form and memorization is provably impossible. In these settings, small transformers reproduce Bayesian posteriors with $10^{-3}$--$10^{-4}$ bit accuracy, while capacity-matched MLPs fail by orders of magnitude, establishing a clear architectural separation. Across two tasks -- bijection elimination and Hidden Markov Model (HMM) state tracking -- we find that transformers implement Bayesian inference through a consistent geometric mechanism: residual streams serve as the belief substrate, feed-forward networks perform the posterior update, and attention provides content-addressable routing. Geometric diagnostics reveal orthogonal key bases, progressive query-key alignment, and a low-dimensional value manifold parameterized by posterior entropy. During training this manifold unfurls while attention patterns remain stable, a \emph{frame-precision dissociation} predicted by recent gradient analyses. Taken together, these results demonstrate that hierarchical attention realizes Bayesian inference by geometric design, explaining both the necessity of attention and the failure of flat architectures. Bayesian wind tunnels provide a foundation for mechanistically connecting small, verifiable systems to reasoning phenomena observed in large language models.
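What makes the HMM state-tracking task a usable wind tunnel is that the exact filtering posterior is available in closed form via the forward recursion, so a model's predictive distribution can be scored against ground truth in bits. The sketch below illustrates this comparison; it is a minimal illustration, not the paper's actual setup -- the transition matrix `T`, emission matrix `E`, observation sequence, and the stand-in `model_posterior` are all hypothetical values chosen for the example.

```python
import numpy as np

def forward_posterior(T, E, obs, prior):
    """Exact filtering posterior p(z_t | x_{1:t}) for a discrete HMM.

    T[i, j] = p(z_{t+1}=j | z_t=i); E[i, k] = p(x=k | z=i).
    Returns one posterior row per observation."""
    belief = prior.copy()
    posteriors = []
    for x in obs:
        belief = belief * E[:, x]       # condition on the new observation
        belief = belief / belief.sum()  # renormalize to a distribution
        posteriors.append(belief.copy())
        belief = belief @ T             # propagate through the dynamics
    return np.array(posteriors)

def kl_bits(p, q, eps=1e-12):
    """KL(p || q) in bits -- the scale on which posterior accuracy is reported."""
    p, q = np.clip(p, eps, 1.0), np.clip(q, eps, 1.0)
    return float(np.sum(p * np.log2(p / q)))

# Toy 2-state HMM (illustrative parameters, not the paper's).
T = np.array([[0.9, 0.1],
              [0.2, 0.8]])
E = np.array([[0.7, 0.3],
              [0.4, 0.6]])
prior = np.array([0.5, 0.5])
obs = [0, 1, 1, 0]

exact = forward_posterior(T, E, obs, prior)

# `model_posterior` stands in for a trained model's readout (hypothetical here):
# we perturb the exact posterior slightly to simulate a near-Bayesian model.
model_posterior = exact + 1e-4 * np.array([1.0, -1.0])
model_posterior /= model_posterior.sum(axis=1, keepdims=True)

for t, (p, q) in enumerate(zip(exact, model_posterior)):
    print(f"t={t}: KL(exact || model) = {kl_bits(p, q):.2e} bits")
```

Because the ground-truth posterior comes from the recursion rather than from data statistics, any agreement a model shows at the $10^{-3}$--$10^{-4}$ bit level cannot be attributed to memorization of the training sequences.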