Score: 3

The Surprising Difficulty of Search in Model-Based Reinforcement Learning

Published: January 29, 2026 | arXiv ID: 2601.21306v1

By: Wei-Di Chang, Mikael Henaff, Brandon Amos, and more

BigTech Affiliations: Meta

Potential Business Impact:

Enables AI agents that plan with learned models of their environment to make better decisions, improving performance and training efficiency on game-like benchmark tasks.

Business Areas:
Semantic Search, Internet Services

This paper investigates search in model-based reinforcement learning (RL). Conventional wisdom holds that long-term predictions and compounding errors are the primary obstacles for model-based RL. We challenge this view, showing that search is not a plug-and-play replacement for a learned policy. Surprisingly, we find that search can harm performance even when the model is highly accurate. Instead, we show that mitigating distribution shift matters more than improving model or value function accuracy. Building on this insight, we identify key techniques for enabling effective search, achieving state-of-the-art performance across multiple popular benchmark domains.
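To make the distribution-shift point concrete, below is a minimal sketch, not the paper's method, of decision-time search with a learned dynamics model: random-shooting planning that scores imagined rollouts with a learned reward and value function. All names here (`dynamics_model`, `reward_fn`, `value_fn`, `policy`) are hypothetical placeholders, and mixing in actions from the base policy is just one illustrative way search can drift away from (or stay near) the states the model was trained on.

```python
import numpy as np

def search_action(state, dynamics_model, reward_fn, value_fn, policy,
                  horizon=5, num_candidates=64, action_dim=2, policy_mix=0.5):
    """Return the first action of the best-scoring imagined rollout.

    Sampling some candidate actions from the base policy (policy_mix) rather
    than purely at random is one schematic way to keep imagined rollouts
    closer to the model's training distribution -- the distribution-shift
    issue the abstract highlights. This is an illustration, not the paper's
    algorithm.
    """
    best_score, best_first_action = -np.inf, None
    for _ in range(num_candidates):
        s, total, first_action = state, 0.0, None
        for _ in range(horizon):
            if np.random.rand() < policy_mix:
                a = policy(s)                              # stay near on-policy actions
            else:
                a = np.random.uniform(-1.0, 1.0, action_dim)  # explore via search
            if first_action is None:
                first_action = a
            total += reward_fn(s, a)
            s = dynamics_model(s, a)                       # imagined next state
        total += value_fn(s)                               # bootstrap beyond the horizon
        if total > best_score:
            best_score, best_first_action = total, first_action
    return best_first_action

# Toy usage with dummy stand-ins for the learned components:
if __name__ == "__main__":
    state = np.zeros(3)
    action = search_action(
        state,
        dynamics_model=lambda s, a: s + 0.1 * np.pad(a, (0, 1)),
        reward_fn=lambda s, a: -float(np.sum(s ** 2)),
        value_fn=lambda s: -float(np.sum(s ** 2)),
        policy=lambda s: np.zeros(2),
    )
    print("chosen action:", action)
```

Setting `policy_mix=0` recovers pure random-shooting search; raising it biases imagined rollouts toward states the model has likely seen, which is the kind of distribution-shift mitigation the paper argues matters more than raw model or value accuracy.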

Country of Origin
🇺🇸 🇨🇦 United States, Canada

Page Count
29 pages

Category
Computer Science:
Machine Learning (CS)