Linear Spatial World Models Emerge in Large Language Models
By: Matthieu Tehenan, Christian Bolivar Moya, Tenghai Long, and more
Potential Business Impact:
Computers learn how objects are arranged in space.
Large language models (LLMs) have demonstrated emergent abilities across diverse tasks, raising the question of whether they acquire internal world models. In this work, we investigate whether LLMs implicitly encode linear spatial world models, which we define as linear representations of physical space and object configurations. We introduce a formal framework for spatial world models and assess whether such structure emerges in contextual embeddings. Using a synthetic dataset of object positions, we train probes to decode object positions from the model's embeddings and evaluate the geometric consistency of the underlying space. We further conduct causal interventions to test whether these spatial representations are functionally used by the model. Our results provide empirical evidence that LLMs encode linear spatial world models.
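The abstract does not include implementation details, so the following is a minimal sketch of what a linear spatial probe of this kind could look like, assuming hidden states have already been extracted at the tokens describing each object. The array shapes, the ridge-regression probe, and the pairwise-distance consistency check are illustrative assumptions, not the authors' actual method; the random `hidden_states` stand in for real LLM activations so the script runs on its own.

```python
# Illustrative sketch of a linear spatial probe (not the paper's code).
# Hidden states are simulated with random data so the script is
# self-contained; in practice they would be LLM activations taken at
# the tokens that describe each object in the synthetic scene.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import pearsonr
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical shapes: n examples, hidden size d, 2-D object positions.
n, d = 2000, 768
planted_map = rng.normal(size=(d, 2))        # planted linear structure
hidden_states = rng.normal(size=(n, d))      # stand-in for real activations
positions = hidden_states @ planted_map      # ground-truth (x, y) targets
positions += 0.1 * rng.normal(size=positions.shape)  # observation noise

X_train, X_test, y_train, y_test = train_test_split(
    hidden_states, positions, test_size=0.2, random_state=0
)

# Linear probe: if positions are linearly decodable from the embeddings,
# a ridge regression should recover them with high held-out R^2.
probe = Ridge(alpha=1.0).fit(X_train, y_train)
decoded = probe.predict(X_test)
print(f"held-out R^2: {r2_score(y_test, decoded):.3f}")

# One simple geometric-consistency check: pairwise distances between
# decoded positions should correlate with the true pairwise distances.
r, _ = pearsonr(pdist(decoded), pdist(y_test))
print(f"pairwise-distance correlation: {r:.3f}")
```

On real activations, the natural comparison is the same probe fit layer by layer against a shuffled-label baseline; the causal-intervention step the abstract mentions would then edit activations along the probe's decoding directions and check whether the model's downstream spatial answers change accordingly.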
Similar Papers
Can LLMs Learn to Map the World from Local Descriptions?
Computation and Language
Helps computers understand maps and find directions.
From Text to Space: Mapping Abstract Spatial Models in LLMs during a Grid-World Navigation Task
Artificial Intelligence
Helps AI understand maps for better navigation.
Can Large Language Models Integrate Spatial Data? Empirical Insights into Reasoning Strengths and Computational Weaknesses
Artificial Intelligence
Helps computers combine messy map data better.