Better World Models Can Lead to Better Post-Training Performance
By: Prakhar Gupta, Henry Conklin, Sarah-Jane Leslie, and more
Potential Business Impact:
Teaches computers to track a puzzle's state so they can plan and solve it better.
In this work we study how explicit world-modeling objectives affect the internal representations and downstream capabilities of Transformers across different training stages. We use a controlled 2x2x2 Rubik's Cube environment and ask: (1) how does explicitly pretraining a world model affect the model's latent representations, and (2) how does world-model quality affect the model's performance after reinforcement-learning post-training? We compare standard next-token prediction to two explicit world-modeling strategies -- (i) state-prediction pretraining and (ii) a joint state-prediction + next-token objective -- and assess task performance after Group Relative Policy Optimization (GRPO) is applied as post-training. We evaluate representation quality with linear probes and causal interventions. We find that explicit world-modeling yields more linearly decodable and causally steerable state representations. More importantly, we find that better state representations lead to larger gains from GRPO, especially on harder cube states. Our results indicate that sharpening state representations can improve the effectiveness of post-training for sequence-planning tasks.
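To make the setup concrete, here is a minimal PyTorch sketch of the joint state-prediction + next-token objective and the linear-probe evaluation described in the abstract. It is not the authors' implementation: the class and function names, the sticker-based cube-state encoding (24 stickers, 6 colors), and the loss weight `lam` are illustrative assumptions.

```python
# Sketch only: assumed architecture and state encoding, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class JointWorldModel(nn.Module):
    """Causal Transformer with a next-token head and an auxiliary
    state-prediction head sharing one backbone."""

    def __init__(self, vocab_size=32, d_model=128, n_heads=4, n_layers=4,
                 max_len=64, num_stickers=24, num_colors=6):
        super().__init__()
        self.num_colors = num_colors
        self.embed = nn.Embedding(vocab_size, d_model)
        self.pos = nn.Embedding(max_len, d_model)  # learned positions (assumed)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, n_layers)
        self.lm_head = nn.Linear(d_model, vocab_size)                  # next-token
        self.state_head = nn.Linear(d_model, num_stickers * num_colors)  # world model

    def forward(self, tokens):
        # Causal mask keeps the next-token objective autoregressive.
        mask = nn.Transformer.generate_square_subsequent_mask(
            tokens.size(1)).to(tokens.device)
        pos_ids = torch.arange(tokens.size(1), device=tokens.device)
        h = self.backbone(self.embed(tokens) + self.pos(pos_ids), mask=mask)
        return self.lm_head(h), self.state_head(h)

def joint_loss(model, tokens, next_tokens, sticker_targets, lam=1.0):
    """L = L_next_token + lam * L_state; `lam` is an assumed weighting."""
    logits, state_logits = model(tokens)
    l_ntp = F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                            next_tokens.reshape(-1))
    # Each sticker treated as an independent 6-way classification (assumed).
    l_state = F.cross_entropy(state_logits.reshape(-1, model.num_colors),
                              sticker_targets.reshape(-1))
    return l_ntp + lam * l_state

def fit_linear_probe(hidden, sticker_targets, num_colors=6, steps=200, lr=1e-2):
    """Linear probe: how decodable is the cube state from frozen activations?"""
    probe = nn.Linear(hidden.size(-1), sticker_targets.size(-1) * num_colors)
    opt = torch.optim.Adam(probe.parameters(), lr=lr)
    for _ in range(steps):
        loss = F.cross_entropy(probe(hidden.detach()).reshape(-1, num_colors),
                               sticker_targets.reshape(-1))
        opt.zero_grad(); loss.backward(); opt.step()
    return probe
```

The design point being tested is visible in the sketch: the state head shares its backbone with the language-model head, so the auxiliary state loss shapes the same representations that GRPO later optimizes during post-training.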
Similar Papers
Clone Deterministic 3D Worlds with Geometrically-Regularized World Models
Machine Learning (CS)
Helps robots learn and predict world changes better.
GWM: Towards Scalable Gaussian World Models for Robotic Manipulation
Robotics
Teaches robots to predict and act in 3D.