A Little Depth Goes a Long Way: The Expressive Power of Log-Depth Transformers
By: William Merrill, Ashish Sabharwal
Potential Business Impact:
Makes computers better at understanding long, step-by-step problems.
Recent theoretical results show that transformers cannot express sequential reasoning problems over long inputs, intuitively because their computational depth is bounded. However, prior work treats the depth as a constant, leaving it unclear to what degree bounded depth may suffice for solving problems over short inputs, or how increasing the transformer's depth affects its expressive power. We address these questions by analyzing transformers whose depth grows minimally with context length $n$. We show that even highly uniform transformers with depth $\Theta(\log n)$ can express two important problems: recognizing regular languages, which captures state-tracking abilities and was previously known to be expressible only by an unconventional, non-uniform model of transformers, and graph connectivity, which underlies multi-step reasoning. Notably, neither of these problems can be expressed by fixed-depth transformers under standard complexity conjectures, demonstrating the expressivity benefit of growing depth. Moreover, our theory quantitatively predicts how depth must grow with input length to express these problems, showing that depth scaling is more efficient than scaling width or chain-of-thought steps. Empirically, our experiments, designed to bridge the expressivity vs. learnability gap, reveal that the theoretical depth requirements for regular language recognition closely match the practical depth requirements for successfully training transformers. Thus, our results clarify how depth affects a transformer's reasoning capabilities and provide practical guidance for selecting an effective depth for sequential reasoning.
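For intuition on the $\Theta(\log n)$ depth bound, the sketch below (not the paper's transformer construction; a plain-Python illustration with hypothetical helper names `regular_recognize` and `connected`) solves both problems with logarithmic sequential depth: regular language recognition by composing DFA transition maps in a balanced binary tree, and graph connectivity by repeated squaring of the reachability matrix. Each loop iteration plays the role of one layer, so the loop count mirrors how depth must scale with input length $n$.

```python
from math import ceil, log2

def regular_recognize(transitions, string, start, accepting):
    """Recognize a regular language via a balanced composition tree.

    `transitions[symbol]` is a dict mapping each DFA state to its
    successor. Composition of these maps is associative, which is what
    lets n of them be combined in ceil(log2(n)) parallel rounds.
    """
    maps = [transitions[c] for c in string]  # one state->state map per symbol
    if not maps:
        return start in accepting
    # Each round halves the list, so the loop runs ceil(log2(n)) times:
    # this is the sequential depth of the computation.
    while len(maps) > 1:
        paired = []
        for i in range(0, len(maps) - 1, 2):
            f, g = maps[i], maps[i + 1]
            paired.append({s: g[f[s]] for s in f})  # apply f, then g
        if len(maps) % 2 == 1:
            paired.append(maps[-1])  # odd element carries over unchanged
        maps = paired
    return maps[0][start] in accepting

def connected(adj, s, t):
    """Decide s-t connectivity with ceil(log2(n)) boolean matrix squarings."""
    n = len(adj)
    # reach[i][j]: a path of length <= 1 exists from i to j
    reach = [[bool(adj[i][j]) or i == j for j in range(n)] for i in range(n)]
    # Each squaring doubles the covered path length, so ceil(log2(n))
    # squarings account for every path of length < n.
    for _ in range(max(1, ceil(log2(n)))):
        reach = [[any(reach[i][k] and reach[k][j] for k in range(n))
                  for j in range(n)] for i in range(n)]
    return reach[s][t]

# PARITY, a regular language that fixed-depth transformers provably cannot
# recognize: state 0/1 tracks whether the count of '1's so far is even/odd.
parity = {"0": {0: 0, 1: 1}, "1": {0: 1, 1: 0}}
assert regular_recognize(parity, "1001", start=0, accepting={0})      # two 1s: even
assert not regular_recognize(parity, "1101", start=0, accepting={0})  # three 1s: odd

# A 4-node path graph 0-1-2-3: node 0 reaches node 3.
path4 = [[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]]
assert connected(path4, 0, 3)
```

In both routines the total work is still polynomial; what shrinks to $\Theta(\log n)$ is only the number of dependent rounds, which is the quantity the paper's depth results concern.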
Similar Papers
Depth-Width tradeoffs in Algorithmic Reasoning of Graph Tasks with Transformers
Machine Learning (CS)
Makes AI understand complex problems with less thinking.
Exploring Depth Generalization in Large Language Models for Solving Recursive Logic Tasks
Artificial Intelligence
Helps computers solve deeply nested problems.
Knee-Deep in C-RASP: A Transformer Depth Hierarchy
Computation and Language
Deeper AI learns more, like a taller stack.