Theoretical Analysis of Positional Encodings in Transformer Models: Impact on Expressiveness and Generalization
By: Yin Li
Potential Business Impact:
Helps AI models handle texts longer than those they were trained on.
Positional encodings are a core part of transformer-based models, enabling processing of sequential data without recurrence. This paper presents a theoretical framework to analyze how various positional encoding methods, including sinusoidal, learned, relative, and bias-based methods like Attention with Linear Biases (ALiBi), impact a transformer's expressiveness, generalization ability, and extrapolation to longer sequences. Expressiveness is defined via function approximation, generalization bounds are established using Rademacher complexity, and new encoding methods based on orthogonal functions, such as wavelets and Legendre polynomials, are proposed. The extrapolation capacity of existing and proposed encodings is analyzed, extending ALiBi's biasing approach to a unified theoretical context. Experimental evaluation on synthetic sequence-to-sequence tasks shows that orthogonal transform-based encodings outperform traditional sinusoidal encodings in generalization and extrapolation. This work addresses a critical gap in transformer theory, providing insights for design choices in natural language processing, computer vision, and other transformer applications.
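Since the abstract surveys sinusoidal encodings, orthogonal-function encodings (e.g., Legendre polynomials), and ALiBi-style attention biases, the following minimal NumPy sketch illustrates what each family computes. The function names and the specific Legendre construction are illustrative assumptions, not the paper's actual implementation.

```python
# Illustrative sketch (NumPy) of three positional-encoding families discussed in the
# abstract. The Legendre-based construction is an assumed example of an
# orthogonal-function encoding, not the paper's exact method.
import numpy as np
from numpy.polynomial import legendre

def sinusoidal_encoding(seq_len: int, d_model: int) -> np.ndarray:
    """Standard fixed sinusoidal encoding (Vaswani et al., 2017)."""
    pos = np.arange(seq_len)[:, None]                      # (seq_len, 1)
    i = np.arange(d_model // 2)[None, :]                   # (1, d_model/2)
    angles = pos / np.power(10000.0, 2 * i / d_model)
    enc = np.zeros((seq_len, d_model))
    enc[:, 0::2] = np.sin(angles)
    enc[:, 1::2] = np.cos(angles)
    return enc

def legendre_encoding(seq_len: int, d_model: int) -> np.ndarray:
    """Hypothetical orthogonal-function encoding: dimension k holds the
    degree-k Legendre polynomial evaluated at positions rescaled to [-1, 1]."""
    x = np.linspace(-1.0, 1.0, seq_len)                    # positions mapped to [-1, 1]
    cols = [legendre.Legendre.basis(k)(x) for k in range(d_model)]
    return np.stack(cols, axis=1)                          # (seq_len, d_model)

def alibi_bias(seq_len: int, num_heads: int) -> np.ndarray:
    """ALiBi-style additive attention bias: -slope * |i - j| per head,
    with geometrically decreasing slopes (Press et al., 2022)."""
    slopes = 2.0 ** (-8.0 * np.arange(1, num_heads + 1) / num_heads)
    dist = np.abs(np.arange(seq_len)[:, None] - np.arange(seq_len)[None, :])
    return -slopes[:, None, None] * dist                   # (num_heads, seq_len, seq_len)

if __name__ == "__main__":
    print(sinusoidal_encoding(128, 64).shape)   # (128, 64)
    print(legendre_encoding(128, 64).shape)     # (128, 64)
    print(alibi_bias(128, 8).shape)             # (8, 128, 128)
```

The sinusoidal and Legendre encodings are added to token embeddings, whereas the ALiBi-style bias is added directly to attention scores; the latter requires no learned or stored position table, which is what makes it attractive for extrapolating to sequence lengths unseen during training.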
Similar Papers
ExPe: Exact Positional Encodings for Generative Transformer Models with Extrapolating Capabilities
Computation and Language
Lets AI understand longer sentences it hasn't seen.
Impact of Positional Encoding: Clean and Adversarial Rademacher Complexity for Transformers under In-Context Regression
Machine Learning (Stat)
Shows how positional encoding affects model accuracy and how easily models are fooled by adversarial inputs.
A Comparative Study on Positional Encoding for Time-frequency Domain Dual-path Transformer-based Source Separation Models
Audio and Speech Processing
Separates mixed sounds better, but only for shorter recordings.