When Do Transformers Outperform Feedforward and Recurrent Networks? A Statistical Perspective
By: Alireza Mousavi-Hosseini, Clayton Sanford, Denny Wu, and more
Potential Business Impact:
Makes AI learn from less data, faster.
Theoretical efforts to prove advantages of Transformers over classical architectures such as feedforward and recurrent neural networks have mostly focused on representational power. In this work, we take an alternative perspective and prove that, even with infinite compute, feedforward and recurrent networks may suffer from larger sample complexity than Transformers, as the latter can adapt to a form of dynamic sparsity. Specifically, we consider a sequence-to-sequence data-generating model on sequences of length $N$, in which the output at each position depends on only $q$ relevant tokens with $q \ll N$, and the positions of these tokens are described in the input prompt. We prove that a single-layer Transformer can learn this model if and only if its number of attention heads is at least $q$, in which case it achieves a sample complexity almost independent of $N$, while recurrent networks require $N^{\Omega(1)}$ samples on the same problem. On a simplified version of this model, recurrent networks may achieve a complexity almost independent of $N$, while feedforward networks still require $N$ samples. Consequently, our proposed sparse retrieval model illustrates a natural hierarchy in sample complexity across these architectures.
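To make the data-generating model concrete, the snippet below is a minimal sketch of a $q$-sparse retrieval task consistent with the abstract: each of the $N$ output positions depends only on $q$ input tokens, and the indices of those tokens are given explicitly as part of the prompt. The specific target function (the mean of the $q$ selected token values) and the way positions are encoded here are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def sample_sparse_retrieval(N=64, q=3, vocab_size=32, rng=None):
    """Sample one (prompt, target) pair from an illustrative q-sparse
    retrieval model: each of the N output positions depends only on
    q input tokens, whose indices are provided explicitly in the prompt.

    NOTE: the target function (mean of the q selected token values) and
    the prompt encoding are assumptions for illustration; the paper's
    exact construction may differ.
    """
    if rng is None:
        rng = np.random.default_rng()
    # Content tokens x_1, ..., x_N.
    tokens = rng.integers(0, vocab_size, size=N)
    # For each output position i, choose q distinct relevant positions.
    idx = np.stack([rng.choice(N, size=q, replace=False) for _ in range(N)])
    # The prompt carries both the tokens and the relevant positions.
    prompt = {"tokens": tokens, "positions": idx}
    # Output at position i depends only on the q selected tokens.
    target = tokens[idx].mean(axis=1)  # shape (N,)
    return prompt, target

if __name__ == "__main__":
    prompt, target = sample_sparse_retrieval(
        N=8, q=2, vocab_size=10, rng=np.random.default_rng(0)
    )
    print(prompt["tokens"])     # N content tokens
    print(prompt["positions"])  # q relevant indices per output position
    print(target)               # each entry depends on only q tokens
```

A learner that can route information from the $q$ indicated positions (as a multi-head attention layer can, with at least $q$ heads) need not depend on the full sequence length $N$, which is the intuition behind the sample-complexity separation stated above.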
Similar Papers
The Effect of Attention Head Count on Transformer Approximation
Machine Learning (CS)
More "attention heads" make AI understand better.
Transformers Can Overcome the Curse of Dimensionality: A Theoretical Study from an Approximation Perspective
Machine Learning (CS)
Makes AI understand complex patterns better and faster.
Generative Modeling of Networked Time-Series via Transformer Architectures
Machine Learning (CS)
Creates more data to make computer programs smarter.