Reversal Invariance in Autoregressive Language Models
By: Mihir Sahasrabudhe
Potential Business Impact:
Helps computers learn which direction language flows.
We formalize a structural property of the causal (autoregressive) language modeling (CLM) objective: reversal invariance. Specifically, the next-token prediction loss assigns identical likelihood to a corpus and its reversal, implying that standard CLM pretraining is direction-blind. This symmetry explains why models trained on reversed text can achieve performance comparable to models trained on forward text, despite the inherently time-asymmetric nature of human language and reasoning. We argue that this invariance is a limitation of current pretraining objectives rather than a benign artifact: if natural language encodes directional dependencies - phonological, morphological, or causal - a symmetric objective may fail to capture them. We therefore propose viewing pretraining through the lens of temporal asymmetry, motivating future work on loss functions and architectures that explicitly model the arrow of language while retaining standard language modeling capacity.
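The invariance claim rests on a basic identity: any joint distribution over token sequences factorizes by the chain rule both left-to-right and right-to-left, so at the exact (population-optimal) level the next-token likelihood of a corpus equals that of its reversal. The sketch below is not the paper's code; it is a minimal numerical check, under the assumption of a toy random joint distribution over length-3 sequences, that the two factorizations assign identical log-likelihood to every sequence.

```python
import itertools
import numpy as np

# Minimal numerical check of reversal invariance (illustrative sketch only).
# Claim being checked: the chain-rule factorization of a joint distribution over
# sequences yields the same likelihood whether conditioning runs left-to-right
# or right-to-left, so the exact CLM objective cannot tell a corpus from its reversal.

rng = np.random.default_rng(0)

VOCAB = [0, 1, 2]  # toy vocabulary
T = 3              # sequence length

# Arbitrary joint distribution p(x_1, ..., x_T) over all length-T sequences.
sequences = list(itertools.product(VOCAB, repeat=T))
probs = rng.random(len(sequences))
probs /= probs.sum()
joint = dict(zip(sequences, probs))

def forward_loglik(seq):
    """log p(seq) via left-to-right conditionals p(x_t | x_{<t})."""
    total = 0.0
    for t in range(T):
        num = sum(p for s, p in joint.items() if s[:t + 1] == seq[:t + 1])
        den = sum(p for s, p in joint.items() if s[:t] == seq[:t])
        total += np.log(num / den)
    return total

def backward_loglik(seq):
    """log p(seq) via right-to-left conditionals p(x_t | x_{>t}), i.e. what an
    exact causal LM trained on the reversed corpus uses to score reversed text."""
    total = 0.0
    for t in reversed(range(T)):
        num = sum(p for s, p in joint.items() if s[t:] == seq[t:])
        den = sum(p for s, p in joint.items() if s[t + 1:] == seq[t + 1:])
        total += np.log(num / den)
    return total

# Both factorizations telescope to the same joint probability.
for seq in sequences:
    assert np.isclose(forward_loglik(seq), backward_loglik(seq))
    assert np.isclose(forward_loglik(seq), np.log(joint[seq]))

print("Forward and reverse chain-rule factorizations agree on every sequence.")
```

The check deliberately marginalizes a small explicit joint rather than training models, which isolates the property of the objective itself from any architectural or optimization effects.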
Similar Papers
Directional Optimization Asymmetry in Transformers: A Synthetic Stress Test
Computation and Language
Makes computers learn tasks backward better.
Memorization, Emergence, and Explaining Reversal Failures: A Controlled Study of Relational Semantics in LLMs
Computation and Language
Makes AI understand "father of" and "son of" logic.
Language Models are Injective and Hence Invertible
Machine Learning (CS)
Lets computers perfectly remember what you typed.