Reversal Invariance in Autoregressive Language Models

Published: November 1, 2025 | arXiv ID: 2511.00341v1

By: Mihir Sahasrabudhe

Potential Business Impact:

Shows that standard language model training is blind to the direction of text, motivating training methods that capture word order and its directionality.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

We formalize a structural property of the causal (autoregressive) language modeling (CLM) objective: reversal invariance. Formally, the next-token prediction loss assigns identical likelihood to a corpus and its reversal, implying that standard CLM pretraining is direction-blind. This symmetry explains why models trained on reversed text can achieve comparable performance to those trained on forward text, despite the inherently time-asymmetric nature of human language and reasoning. We argue that this invariance represents a limitation of current pretraining objectives rather than a benign artifact. If natural language encodes directional dependencies - phonological, morphological, or causal - a symmetric objective may fail to capture them. We therefore propose viewing pretraining through the lens of temporal asymmetry, motivating future work on loss functions and architectures that explicitly model the arrow of language while retaining standard language modeling capacity.
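As a rough illustration of the symmetry (a minimal sketch using standard chain-rule notation, not taken from the paper): a token sequence $x_{1:T}$ factorizes in either direction,

\[
\log p(x_{1:T}) \;=\; \sum_{t=1}^{T} \log p\!\left(x_t \mid x_{1:t-1}\right) \;=\; \sum_{t=1}^{T} \log p\!\left(x_t \mid x_{t+1:T}\right),
\]

so the best achievable next-token loss on a corpus is the same whether the text is read forward or reversed; the objective itself carries no arrow of time.

The same point can be checked numerically with a fully expressive (tabular, maximum-likelihood) next-token model. The helper and toy corpus below are hypothetical and only for illustration; they are not the paper's construction:

```python
import math
from collections import Counter

def corpus_nll(corpus):
    """Total next-token negative log-likelihood of a corpus under the
    maximum-likelihood full-context (tabular) model, where
    p(x_t | x_{1:t-1}) = count(x_{1:t}) / count(x_{1:t-1})."""
    prefix_counts = Counter()
    for seq in corpus:
        for t in range(len(seq) + 1):            # include the empty prefix
            prefix_counts[seq[:t]] += 1
    nll = 0.0
    for seq in corpus:
        for t in range(1, len(seq) + 1):
            p = prefix_counts[seq[:t]] / prefix_counts[seq[:t - 1]]
            nll -= math.log(p)
    return nll

# Hypothetical toy corpus of token tuples, purely for illustration.
corpus = [("the", "cat", "sat"), ("the", "cat", "slept"), ("a", "dog", "sat")]
reversed_corpus = [tuple(reversed(seq)) for seq in corpus]

print(corpus_nll(corpus))            # forward reading direction
print(corpus_nll(reversed_corpus))   # reversed reading direction -> same total
```

The two printed totals coincide exactly, matching the direction-blindness described in the abstract.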

Country of Origin
πŸ‡ΊπŸ‡Έ United States

Page Count
7 pages

Category
Computer Science:
Computation and Language