Allocation of Parameters in Transformers
By: Ruoxi Yu, Haotian Jiang, Jingpu Cheng, and more
Potential Business Impact:
Shows how to split a Transformer's attention heads and head dimensions across layers so models run more efficiently without losing accuracy.
Transformers have achieved remarkable successes across a wide range of applications, yet the theoretical foundation of their model efficiency remains underexplored. In this work, we investigate how model parameters (mainly attention heads and head dimensions) should be allocated across layers to balance expressivity and efficiency. We first provide a mathematical analysis of the role of early layers in information extraction from an approximation perspective, with a theoretical characterization of the trade-off between the number of heads and the head dimension under a fixed parameter budget. In addition, we uncover and prove the saturation behavior of softmax activations: continuously increasing head dimensions yields diminishing returns in learning error, particularly for long sequences. Supported by both theory and experiments, this saturation pattern suggests that later layers can operate more efficiently with fewer parameters. Combining these insights, we propose principled strategies for allocating attention heads and dimensions across a Transformer's layers, shedding light on the theoretically grounded model efficiency of Transformer-based architectures.
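As a rough illustration of the head-count versus head-dimension trade-off described in the abstract, the Python sketch below counts the attention parameters of a single layer when the number of heads and the per-head dimension are varied under a fixed budget. All sizes (d_model = 512, the inner width, the head splits) are illustrative assumptions, not values from the paper, and the count covers only the Q/K/V and output projections; the saturation result itself is a theoretical claim of the paper and is not reproduced here.

# Minimal sketch (illustrative assumptions, not the paper's construction):
# per-layer attention parameters when the head count h and head dimension d_h
# are chosen independently of the model width d_model. Under a fixed budget,
# the product h * d_h is pinned, so more heads means smaller heads.

def attn_param_count(d_model: int, heads: int, head_dim: int) -> int:
    """Q, K, V projections (d_model -> heads*head_dim each) plus the
    output projection (heads*head_dim -> d_model); biases ignored."""
    inner = heads * head_dim
    return 3 * d_model * inner + inner * d_model  # = 4 * d_model * h * d_h

d_model = 512                        # assumed model width
inner_width = 512                    # fix h * d_h, hence the parameter budget
budget = attn_param_count(d_model, 1, inner_width)

for heads in (1, 2, 4, 8, 16, 32):
    head_dim = inner_width // heads  # keep h * d_h constant
    assert attn_param_count(d_model, heads, head_dim) == budget
    print(f"heads={heads:2d}  head_dim={head_dim:3d}  attn_params={budget:,}")

Every split prints the same parameter count, which is the budget constraint under which the paper studies how many heads versus how large a head each layer should get.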
Similar Papers
The Effect of Attention Head Count on Transformer Approximation
Machine Learning (CS)
More "attention heads" make AI understand better.
Transformers Can Overcome the Curse of Dimensionality: A Theoretical Study from an Approximation Perspective
Machine Learning (CS)
Shows, from an approximation perspective, that Transformers can handle high-dimensional problems without the usual curse of dimensionality.
Intrinsic and Extrinsic Organized Attention: Softmax Invariance and Network Sparsity
Numerical Analysis
Examines softmax invariance and sparsity in the organization of attention networks.