Position as Probability: Self-Supervised Transformers that Think Past Their Training for Length Extrapolation
By: Philip Heejun Lee
Potential Business Impact:
Lets computers solve problems much longer than the ones they were trained on.
Deep sequence models typically degrade in accuracy when test sequences significantly exceed their training lengths, yet many critical tasks--such as algorithmic reasoning, multi-step arithmetic, and compositional generalization--require robust length extrapolation. We introduce PRISM (Probabilistic Relative-position Implicit Superposition Model), a novel positional encoding mechanism that enables Transformers to extrapolate accurately up to 10x beyond their training length. PRISM learns continuous relative positions through a differentiable histogram-filter update, preserving positional uncertainty as a probabilistic superposition rather than a conventional deterministic embedding. Empirically, PRISM achieves state-of-the-art length extrapolation, generalizing to previously intractable sequence lengths across algorithmic benchmarks--including arithmetic (addition, multiplication), SCAN compositionality tasks, and complex copy variants derived from DeepMind's recent datasets. Our analysis shows that PRISM's stochastic positional encoding keeps internal states sharp and interpretable, providing a theoretical basis for reliable length generalization. These results advance the goal of neural sequence models that remain algorithmically robust at lengths far exceeding their training horizon.
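To make the "differentiable histogram-filter update" idea concrete, the sketch below shows one plausible way such a mechanism could look: each token carries a probability histogram over discretized relative-position bins, the histogram is advanced per token by a learned, row-stochastic transition kernel (the predict step of a histogram filter), and the token's positional code is the probability-weighted superposition of per-bin embeddings. All class names, shapes, and hyperparameters here are illustrative assumptions for exposition, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class HistogramPositionFilter(nn.Module):
    """Illustrative sketch (not the PRISM reference code): a differentiable
    histogram filter over relative positions.

    Each token holds a probability histogram over `num_bins` discretized
    relative-position bins. Moving to the next token applies a learned
    transition kernel, so position stays a superposition (a distribution)
    rather than a single deterministic index.
    """

    def __init__(self, num_bins: int = 64, dim: int = 128):
        super().__init__()
        self.num_bins = num_bins
        # Learned transition logits for the predict step, shared across positions.
        self.transition_logits = nn.Parameter(torch.zeros(num_bins, num_bins))
        # One embedding per histogram bin; a token's positional code is the
        # probability-weighted mixture of these embeddings.
        self.bin_embed = nn.Embedding(num_bins, dim)

    def step(self, hist: torch.Tensor) -> torch.Tensor:
        """Advance the histogram by one token: (batch, num_bins) -> (batch, num_bins)."""
        transition = F.softmax(self.transition_logits, dim=-1)  # rows sum to 1
        return hist @ transition  # differentiable predict step; mass is conserved

    def forward(self, seq_len: int, batch: int = 1) -> torch.Tensor:
        """Return soft positional codes of shape (batch, seq_len, dim)."""
        hist = torch.zeros(batch, self.num_bins)
        hist[:, 0] = 1.0  # all probability mass starts on the first bin
        codes = []
        for _ in range(seq_len):
            # Superpose bin embeddings under the current position distribution.
            codes.append(hist @ self.bin_embed.weight)
            hist = self.step(hist)
        return torch.stack(codes, dim=1)


# Usage: codes for sequences longer than any seen in training are obtained by
# simply running the filter for more steps, e.g. 256 positions here.
pe = HistogramPositionFilter(num_bins=64, dim=128)
codes = pe(seq_len=256)  # shape (1, 256, 128)
```

Because the update is just repeated application of a learned stochastic kernel, nothing in this sketch is tied to a maximum trained length, which is the intuition behind extrapolating to sequences far beyond the training horizon.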
Similar Papers
Pay Attention Later: From Vector Space Diffusion to Linearithmic Spectral Phase-Locking
Machine Learning (CS)
Lets AI learn new things without forgetting old ones.
ExPe: Exact Positional Encodings for Generative Transformer Models with Extrapolating Capabilities
Computation and Language
Lets AI understand longer sentences it hasn't seen.
SeqPE: Transformer with Sequential Position Encoding
Machine Learning (CS)
Helps AI understand longer texts and images.