Interpreting Emergent Planning in Model-Free Reinforcement Learning
By: Thomas Bush, Stephen Chung, Usman Anwar, and more
Potential Business Impact:
Computers learn to plan ahead like people.
We present the first mechanistic evidence that model-free reinforcement learning agents can learn to plan. This is achieved by applying a methodology based on concept-based interpretability to a model-free agent in Sokoban -- a commonly used benchmark for studying planning. Specifically, we demonstrate that DRC, a generic model-free agent introduced by Guez et al. (2019), uses learned concept representations to internally formulate plans that both predict the long-term effects of actions on the environment and influence action selection. Our methodology involves: (1) probing for planning-relevant concepts, (2) investigating plan formation within the agent's representations, and (3) verifying that discovered plans (in the agent's representations) have a causal effect on the agent's behavior through interventions. We also show that the emergence of these plans coincides with the emergence of a planning-like property: the ability to benefit from additional test-time compute. Finally, we perform a qualitative analysis of the planning algorithm learned by the agent and discover a strong resemblance to parallelized bidirectional search. Our findings advance understanding of the internal mechanisms underlying planning behavior in agents, which is important given the recent emergence of planning and reasoning capabilities in LLMs trained via RL.
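As an illustrative aside, the sketch below shows what step (1), probing for planning-relevant concepts, could look like in practice: a linear probe is trained to decode a concept from the agent's hidden activations, and decodable concepts are taken as evidence of an internal representation. The array names (activations, concept_labels) and the choice of a logistic-regression probe are placeholder assumptions for illustration, not the authors' implementation.

# Minimal sketch of concept probing, assuming placeholder data rather than
# real DRC activations. If a simple linear model can decode a concept (e.g.
# "this square's box will be pushed in direction d") from the agent's hidden
# states, the concept is at least linearly represented by the agent.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Placeholder stand-ins: per-square hidden-state vectors collected from the
# agent (n_samples, hidden_dim) and binary labels for one square-level concept.
activations = rng.normal(size=(5000, 32))
concept_labels = rng.integers(0, 2, size=5000)

X_train, X_test, y_train, y_test = train_test_split(
    activations, concept_labels, test_size=0.2, random_state=0
)

# Train the linear probe and report held-out decoding accuracy; accuracy well
# above chance would suggest the concept is encoded in the activations.
probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"probe accuracy: {probe.score(X_test, y_test):.3f}")

In the same spirit, step (3) would intervene on the probed directions in the agent's hidden state and check whether behavior changes accordingly; the sketch above covers only the probing step.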
Similar Papers
Deep RL Needs Deep Behavior Analysis: Exploring Implicit Planning by Model-Free Agents in Open-Ended Environments
Artificial Intelligence
Shows how computer brains learn like animals.
Model-Free RL Agents Demonstrate System 1-Like Intentionality
Artificial Intelligence
AI learns to act fast, like a gut feeling.
Interpretable Learning Dynamics in Unsupervised Reinforcement Learning
Machine Learning (CS)
Helps robots learn faster by watching what's interesting.