From Word to World: Can Large Language Models be Implicit Text-based World Models?
By: Yixia Li, Hongru Wang, Jiahao Qiu, and more
Potential Business Impact:
Helps AI learn faster by practicing in fake worlds.
Agentic reinforcement learning increasingly relies on experience-driven scaling, yet real-world environments remain non-adaptive, limited in coverage, and difficult to scale. World models offer a potential way to improve learning efficiency through simulated experience, but it remains unclear whether large language models can reliably serve this role and under what conditions they meaningfully benefit agents. We study these questions in text-based environments, which provide a controlled setting to reinterpret language modeling as next-state prediction under interaction. We introduce a three-level framework for evaluating LLM-based world models: (i) fidelity and consistency, (ii) scalability and robustness, and (iii) agent utility. Across five representative environments, we find that sufficiently trained world models maintain coherent latent state, scale predictably with data and model size, and improve agent performance via action verification, synthetic trajectory generation, and warm-starting reinforcement learning. However, these gains depend critically on behavioral coverage and environment complexity, delineating clear boundaries for when world modeling effectively supports agent learning.
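The abstract's framing of "language modeling as next-state prediction under interaction" can be sketched concretely. The code below is a minimal toy illustration, not the paper's implementation: all names (`TextWorldModel`, `verify_action`, the door/hallway transitions) are invented for this sketch, and a memorized transition table stands in for the LLM's generative next-state prediction.

```python
# Sketch: a text-based world model as a conditional next-state predictor,
# p(next_state | state, action), plus action verification — one of the
# three agent uses the abstract lists. Toy example; names are assumptions.
from dataclasses import dataclass


@dataclass(frozen=True)
class Transition:
    state: str       # textual observation before the action
    action: str      # textual action, e.g. "open door"
    next_state: str  # textual observation after the action


class TextWorldModel:
    """Memorizes observed transitions. An LLM world model would instead
    generate the next-state text token by token and generalize to unseen
    (state, action) pairs; the lookup stands in for that decoding step."""

    def __init__(self):
        self.table = {}

    def observe(self, t: Transition):
        self.table[(t.state, t.action)] = t.next_state

    def predict(self, state: str, action: str) -> str:
        return self.table.get((state, action), "<unknown>")


def verify_action(model: TextWorldModel, state: str, action: str,
                  bad_states: set) -> bool:
    """Action verification: simulate the candidate action inside the
    world model and reject it if the predicted outcome is undesirable,
    sparing the agent a real-environment rollout."""
    return model.predict(state, action) not in bad_states


model = TextWorldModel()
model.observe(Transition("at door", "open door", "in hallway"))
model.observe(Transition("at door", "step back", "at cliff edge"))

print(model.predict("at door", "open door"))                       # in hallway
print(verify_action(model, "at door", "step back", {"at cliff edge"}))  # False
```

The same simulate-then-check interface underlies the other two uses the abstract names: rolling the model forward to produce synthetic trajectories, and pretraining a policy on those rollouts to warm-start reinforcement learning.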
Similar Papers
Language-Driven Hierarchical Task Structures as Explicit World Models for Multi-Agent Learning
Artificial Intelligence
Teaches robots to play soccer by explaining rules.
WorldLLM: Improving LLMs' world modeling using curiosity-driven theory-making
Artificial Intelligence
Helps AI understand and predict game worlds better.
Language-conditioned world model improves policy generalization by reading environmental descriptions
Computation and Language
Teaches robots to learn new games from words.