Score: 2

Better World Models Can Lead to Better Post-Training Performance

Published: December 3, 2025 | arXiv ID: 2512.03400v1

By: Prakhar Gupta, Henry Conklin, Sarah-Jane Leslie, and more

BigTech Affiliations: Princeton University

Potential Business Impact:

Shows that training models to explicitly track world state (here, a Rubik's Cube) makes reinforcement-learning post-training more effective on planning tasks.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

In this work we study how explicit world-modeling objectives affect the internal representations and downstream capability of Transformers across different training stages. We use a controlled 2x2x2 Rubik's Cube environment and ask: (1) how does explicitly pretraining a world model affect the model's latent representations, and (2) how does world-model quality affect the model's performance after reinforcement learning post-training? We compare standard next-token prediction to two explicit world-modeling strategies -- (i) state-prediction pretraining and (ii) a joint state-prediction + next-token objective -- and assess task performance after Group Relative Policy Optimization (GRPO) is applied as post-training. We evaluate representation quality with linear probes and causal interventions. We find that explicit world-modeling yields more linearly decodable and causally steerable state representations. More importantly, we find that improved state representations lead to higher gains from GRPO, especially on harder cube states. Our results indicate that sharpening state representations can improve the effectiveness of post-training for sequence-planning tasks.
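As a rough illustration of the joint objective described above (a sketch, not the paper's implementation): if the cube state is tokenized alongside the move sequence and the model emits separate heads for next-move and next-state prediction, the combined loss could be a weighted sum of two cross-entropies. The function, tensor shapes, and the weighting term `alpha` below are all hypothetical.

```python
import torch
import torch.nn.functional as F

def joint_worldmodel_loss(move_logits, move_targets,
                          state_logits, state_targets,
                          alpha=0.5):
    """Hypothetical joint state-prediction + next-token objective.

    move_logits:   (batch, seq, move_vocab)  -- next-move predictions
    move_targets:  (batch, seq)              -- ground-truth next moves
    state_logits:  (batch, seq, state_vocab) -- next-cube-state predictions
    state_targets: (batch, seq)              -- ground-truth next states
    alpha: weight on the state-prediction term (assumed, not from the paper)
    """
    # Standard next-token (next-move) language-modeling loss
    lm_loss = F.cross_entropy(
        move_logits.reshape(-1, move_logits.size(-1)),
        move_targets.reshape(-1),
    )
    # Explicit world-modeling loss: predict the resulting cube state
    state_loss = F.cross_entropy(
        state_logits.reshape(-1, state_logits.size(-1)),
        state_targets.reshape(-1),
    )
    return (1 - alpha) * lm_loss + alpha * state_loss
```

Setting `alpha=0` recovers plain next-token pretraining, while `alpha=1` corresponds to state-prediction-only pretraining, matching the three training regimes the abstract compares.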

Country of Origin
🇺🇸 United States

Repos / Data Links

Page Count
9 pages

Category
Computer Science:
Machine Learning (CS)