RL's Razor: Why Online Reinforcement Learning Forgets Less

Published: September 4, 2025 | arXiv ID: 2509.04259v1

By: Idan Shenfeld, Jyothish Pari, Pulkit Agrawal

BigTech Affiliations: Massachusetts Institute of Technology

Potential Business Impact:

Keeps AI models from forgetting what they already know while they learn new tasks.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

Comparison of fine-tuning models with reinforcement learning (RL) and supervised fine-tuning (SFT) reveals that, despite similar performance at a new task, RL preserves prior knowledge and capabilities significantly better. We find that the degree of forgetting is determined by the distributional shift, measured as the KL-divergence between the fine-tuned and base policy evaluated on the new task. Our analysis reveals that on-policy RL is implicitly biased towards KL-minimal solutions among the many that solve the new task, whereas SFT can converge to distributions arbitrarily far from the base model. We validate these findings through experiments with large language models and robotic foundation models and further provide theoretical justification for why on-policy RL updates lead to a smaller KL change. We term this principle $\textit{RL's Razor}$: among all ways to solve a new task, RL prefers those closest in KL to the original model.
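The key quantity in the abstract is the KL divergence between the fine-tuned and base policies, measured on new-task inputs. The sketch below illustrates one way such a quantity could be estimated for language models; it is a minimal illustration, not the paper's protocol. It assumes Hugging Face transformers and PyTorch, the checkpoint names and prompts are placeholders, and the choice of direction (fine-tuned relative to base, averaged per token) is one plausible reading of the abstract.

```python
# Minimal sketch: estimate a per-token KL divergence between a fine-tuned
# policy and its base model on new-task prompts, as a proxy for the
# distributional shift the paper links to forgetting.
# Assumptions: checkpoint names and prompts are placeholders; the KL
# direction and token-level averaging are illustrative choices, not the
# authors' exact estimator.

import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

BASE_NAME = "base-model-placeholder"            # hypothetical checkpoints
FINETUNED_NAME = "finetuned-model-placeholder"

tokenizer = AutoTokenizer.from_pretrained(BASE_NAME)
base = AutoModelForCausalLM.from_pretrained(BASE_NAME).eval()
finetuned = AutoModelForCausalLM.from_pretrained(FINETUNED_NAME).eval()

new_task_prompts = [
    "Solve: 17 * 24 =",                         # placeholder new-task inputs
    "Translate to French: good morning",
]

@torch.no_grad()
def mean_token_kl(prompts):
    """Average per-token KL(pi_finetuned || pi_base) over the given prompts."""
    total_kl, total_tokens = 0.0, 0
    for text in prompts:
        ids = tokenizer(text, return_tensors="pt").input_ids
        logp_ft = F.log_softmax(finetuned(ids).logits, dim=-1)
        logp_base = F.log_softmax(base(ids).logits, dim=-1)
        # KL(p_ft || p_base) summed over the vocabulary at each position.
        kl_per_pos = (logp_ft.exp() * (logp_ft - logp_base)).sum(dim=-1)
        total_kl += kl_per_pos.sum().item()
        total_tokens += kl_per_pos.numel()
    return total_kl / total_tokens

print(f"Estimated KL to base policy: {mean_token_kl(new_task_prompts):.4f} nats/token")
```

Under the paper's claim, an RL-fine-tuned checkpoint should yield a smaller value of this kind of metric than an SFT checkpoint with comparable new-task performance, and that smaller shift is what predicts less forgetting.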

Country of Origin
🇺🇸 United States

Page Count
23 pages

Category
Computer Science:
Machine Learning (CS)