What Makes Looped Transformers Perform Better Than Non-Recursive Ones (Provably)
By: Zixuan Gong, Jiaye Teng, Yong Liu
Potential Business Impact:
Makes computers learn harder things faster.
While looped transformers (termed Looped-Attn) often outperform standard transformers (termed Single-Attn) on complex reasoning tasks, the theoretical basis for this advantage remains underexplored. In this paper, we explain this phenomenon through the lens of loss landscape geometry, inspired by empirical observations of their distinct training dynamics at both the sample and Hessian levels. To formalize this, we extend the River-Valley landscape model by distinguishing between U-shaped valleys (flat) and V-shaped valleys (steep). Based on empirical observations, we conjecture that the recursive architecture of Looped-Attn induces a landscape-level inductive bias towards a River-V-Valley. Theoretical derivations based on this inductive bias guarantee better loss convergence along the river via valley hopping, and further encourage the learning of complex patterns, compared with the River-U-Valley induced by Single-Attn. Building on this insight, we propose SHIFT (Staged HIerarchical Framework for Progressive Training), a staged training framework that accelerates the training of Looped-Attn while achieving comparable performance.
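To make the architectural contrast concrete, here is a minimal PyTorch sketch of the two designs the abstract compares: a standard stack of distinct attention blocks (Single-Attn) versus a single weight-tied block applied recursively for several iterations (Looped-Attn). The module names, dimensions, and loop count are illustrative assumptions for exposition, not the paper's implementation.

```python
# Illustrative sketch (assumed details, not the authors' code): Single-Attn
# stacks L distinct blocks, Looped-Attn reuses one block's weights T times.
import torch
import torch.nn as nn


class AttnBlock(nn.Module):
    """One pre-norm self-attention + MLP block."""

    def __init__(self, d_model: int = 256, n_heads: int = 4):
        super().__init__()
        self.norm1 = nn.LayerNorm(d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm2 = nn.LayerNorm(d_model)
        self.mlp = nn.Sequential(
            nn.Linear(d_model, 4 * d_model),
            nn.GELU(),
            nn.Linear(4 * d_model, d_model),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.norm1(x)
        attn_out, _ = self.attn(h, h, h, need_weights=False)
        x = x + attn_out
        x = x + self.mlp(self.norm2(x))
        return x


class SingleAttn(nn.Module):
    """Non-recursive baseline: each of the `depth` blocks is applied once."""

    def __init__(self, depth: int = 6, d_model: int = 256):
        super().__init__()
        self.blocks = nn.ModuleList(AttnBlock(d_model) for _ in range(depth))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        for block in self.blocks:
            x = block(x)
        return x


class LoopedAttn(nn.Module):
    """Recursive variant: one weight-tied block is applied `n_loops` times."""

    def __init__(self, n_loops: int = 6, d_model: int = 256):
        super().__init__()
        self.block = AttnBlock(d_model)
        self.n_loops = n_loops

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        for _ in range(self.n_loops):
            x = self.block(x)  # same parameters reused at every iteration
        return x


if __name__ == "__main__":
    tokens = torch.randn(2, 16, 256)  # (batch, sequence length, d_model)
    print(SingleAttn()(tokens).shape)  # torch.Size([2, 16, 256])
    print(LoopedAttn()(tokens).shape)  # torch.Size([2, 16, 256])
```

The weight tying in `LoopedAttn` is the recursive structure the paper argues induces the River-V-Valley inductive bias; both variants map a sequence to a same-shaped sequence, so they can be swapped within the same training pipeline.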
Similar Papers
Reasoning with Latent Thoughts: On the Power of Looped Transformers
Computation and Language
Makes computers solve hard problems with fewer parts.
The Curved Spacetime of Transformer Architectures
Machine Learning (CS)
Makes AI understand words by bending their meanings.
Neural Algorithmic Reasoning for Hypergraphs with Looped Transformers
Machine Learning (CS)
Helps computers solve complex problems with many connections.