Continual Reinforcement Learning by Planning with Online World Models

Published: July 12, 2025 | arXiv ID: 2507.09177v1

By: Zichen Liu, Guoji Fu, Chao Du, and more

Potential Business Impact:

Keeps robots learning new tricks without forgetting old ones.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

Continual reinforcement learning (CRL) refers to a naturalistic setting where an agent must endlessly evolve, by trial and error, to solve multiple tasks presented sequentially. One of the largest obstacles to CRL is that the agent may forget how to solve previous tasks while learning a new one, known as catastrophic forgetting. In this paper, we propose to address this challenge by planning with online world models. Specifically, we learn a shallow Follow-The-Leader (FTL) model online to capture the world dynamics, and plan through it with model predictive control to solve a set of tasks specified by arbitrary reward functions. The online world model is immune to forgetting by construction, with a proven regret bound of $\mathcal{O}(\sqrt{K^2D\log(T)})$ under mild assumptions. The planner searches for actions based solely on the latest online model, forming an FTL Online Agent (OA) that updates incrementally. To assess OA, we further design Continual Bench, a dedicated environment for CRL, and compare it with several strong baselines under the same model-planning algorithmic framework. The empirical results show that OA learns continuously to solve new tasks without forgetting old skills, outperforming agents built on deep world models with various continual learning techniques.
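The abstract names two components: a Follow-The-Leader world model learned online and a model predictive control (MPC) planner that acts through it. Below is a minimal sketch of how those pieces could fit together, assuming a linear-in-features dynamics model and a simple random-shooting planner; `FTLWorldModel`, `mpc_plan`, and all parameters here are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

class FTLWorldModel:
    """Follow-The-Leader linear world model (illustrative sketch).

    For squared loss, FTL at step t plays the least-squares minimizer over
    all (state, action) -> next_state pairs seen so far. Keeping the
    sufficient statistics A = sum x x^T and B = sum x y^T makes the update
    incremental: no past data is replayed, yet every past transition still
    shapes the fit, which is one way a model can avoid forgetting by
    construction.
    """

    def __init__(self, state_dim, action_dim, reg=1e-6):
        d = state_dim + action_dim
        self.A = reg * np.eye(d)           # regularized Gram matrix
        self.B = np.zeros((d, state_dim))  # input/next-state cross terms
        self.W = np.zeros((d, state_dim))  # current FTL weights

    def update(self, state, action, next_state):
        x = np.concatenate([state, action])
        self.A += np.outer(x, x)
        self.B += np.outer(x, next_state)
        # FTL for squared loss = least squares on all data seen so far.
        self.W = np.linalg.solve(self.A, self.B)

    def predict(self, state, action):
        x = np.concatenate([state, action])
        return x @ self.W

def mpc_plan(model, state, reward_fn, horizon=10, n_samples=256, action_dim=2):
    """Random-shooting MPC: sample action sequences, roll each one out
    through the learned model, and return the first action of the best
    sequence. reward_fn(state, action) encodes the current task, so the
    same model can serve any task by swapping the reward."""
    rng = np.random.default_rng()
    best_return, best_action = -np.inf, None
    for _ in range(n_samples):
        actions = rng.uniform(-1.0, 1.0, size=(horizon, action_dim))
        s, total = state, 0.0
        for a in actions:
            total += reward_fn(s, a)
            s = model.predict(s, a)
        if total > best_return:
            best_return, best_action = total, actions[0]
    return best_action
```

Because the planner reads only the latest model and the model only accumulates sufficient statistics, the agent updates incrementally in the spirit of the OA described above; the paper's actual model class, regret analysis, and planner are detailed in the full text.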

Page Count
27 pages

Category
Computer Science:
Machine Learning (CS)