Time Travel Engine: A Shared Latent Chronological Manifold Enables Historical Navigation in Large Language Models
By: Jingmin An, Wei Liu, Qian Wang, and more
Time functions as a fundamental dimension of human cognition, yet the mechanisms by which Large Language Models (LLMs) encode chronological progression remain opaque. We demonstrate that temporal information in their latent space is organized not as discrete clusters but as a continuous, traversable geometry. We introduce the Time Travel Engine (TTE), an interpretability-driven framework that projects diachronic linguistic patterns onto a shared chronological manifold. Unlike surface-level prompting, TTE directly modulates latent representations to induce coherent stylistic, lexical, and conceptual shifts aligned with target eras. By parameterizing diachronic evolution as a continuous manifold within the residual stream, TTE enables fluid navigation through period-specific "zeitgeists" while restricting access to future knowledge. Furthermore, experiments across diverse architectures reveal topological isomorphism between the temporal subspaces of Chinese and English, indicating that distinct languages share a universal geometric logic of historical evolution. These findings bridge historical linguistics with mechanistic interpretability, offering a novel paradigm for controlling temporal reasoning in neural networks.
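The abstract does not spell out how TTE modulates the residual stream. As a rough, hypothetical illustration of the general technique it invokes (steering activations along a latent chronological direction), the sketch below estimates a "time direction" from era-labeled text and adds it during generation via a forward hook. The model (gpt2), layer index, steering scale, and toy snippets are all assumptions for illustration, not details from the paper.

```python
# A minimal activation-steering sketch, NOT the paper's actual TTE method.
# Idea: estimate a chronological direction in the residual stream from
# era-labeled text, then shift activations along it at generation time.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

LAYER = 6  # assumed intervention depth; the abstract names no specific layer

def mean_resid(texts, layer=LAYER):
    """Mean residual-stream activation at `layer`, averaged over tokens and texts."""
    acts = []
    for t in texts:
        ids = tok(t, return_tensors="pt")
        with torch.no_grad():
            out = model(**ids, output_hidden_states=True)
        acts.append(out.hidden_states[layer].mean(dim=1))  # (1, d_model)
    return torch.cat(acts).mean(dim=0)  # (d_model,)

# Toy era-labeled snippets standing in for a diachronic corpus (assumption).
old_texts = ["Thou art most welcome, good sir, unto mine humble abode."]
new_texts = ["Hey, welcome! Come on in and make yourself at home."]

# Difference-of-means vector pointing from "modern" toward "archaic" style.
time_direction = mean_resid(old_texts) - mean_resid(new_texts)
time_direction = time_direction / time_direction.norm()

def make_hook(alpha):
    """Add alpha * time_direction to the residual stream leaving block LAYER."""
    def hook(module, inputs, output):
        hidden = output[0] if isinstance(output, tuple) else output
        steered = hidden + alpha * time_direction.to(hidden.dtype)
        if isinstance(output, tuple):
            return (steered,) + output[1:]
        return steered
    return hook

# Steer generation "back in time" by a continuous amount alpha.
handle = model.transformer.h[LAYER].register_forward_hook(make_hook(alpha=4.0))
ids = tok("The traveler said", return_tensors="pt")
print(tok.decode(model.generate(**ids, max_new_tokens=30, do_sample=False)[0]))
handle.remove()
```

Varying alpha continuously would correspond to the abstract's "fluid navigation" along the manifold; negative values would shift toward the modern end of the timeline.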
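The cross-lingual "topological isomorphism" claim likewise suggests a representational-similarity-style check. The sketch below shows one hypothetical way to quantify whether two languages' era geometries align: compare the pairwise-distance structure of era centroids. The centroids here are random placeholders; nothing below reproduces the paper's actual analysis.

```python
# Hedged sketch: test whether two languages' temporal subspaces share the
# same relational geometry by correlating their era-centroid distance matrices.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_eras, d_model = 8, 768  # assumed: 8 historical eras, GPT-2-sized hidden dim

# Placeholder era centroids (one row per era, oldest first). In practice these
# would be residual-stream centroids of era-labeled English and Chinese text.
centroids_en = rng.normal(size=(n_eras, d_model))
centroids_zh = rng.normal(size=(n_eras, d_model))

# Condensed pairwise-distance vectors capture each manifold's geometry.
dist_en = pdist(centroids_en, metric="cosine")
dist_zh = pdist(centroids_zh, metric="cosine")

# High rank correlation between the two distance structures would indicate
# that the languages' chronological manifolds are geometrically aligned.
rho, p = spearmanr(dist_en, dist_zh)
print(f"cross-lingual geometry correlation: rho={rho:.3f} (p={p:.3g})")
```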
Similar Papers
The dynamics of meaning through time: Assessment of Large Language Models
Computation and Language
Helps computers understand how words change meaning over time.
The Other Mind: How Language Models Exhibit Human Temporal Cognition
Artificial Intelligence
Computers learn to understand time like people.
Instruction Tuning Chronologically Consistent Language Models
Machine Learning (CS)
Makes AI predictions honest by preventing them from cheating with future information.