Continual Reinforcement Learning by Planning with Online World Models
By: Zichen Liu, Guoji Fu, Chao Du, and more
Potential Business Impact:
Keeps robots learning new tricks without forgetting old ones.
Continual reinforcement learning (CRL) refers to a naturalistic setting where an agent must endlessly evolve, by trial and error, to solve multiple tasks that are presented sequentially. One of the largest obstacles to CRL is that the agent may forget how to solve previous tasks when learning a new task, known as catastrophic forgetting. In this paper, we propose to address this challenge by planning with online world models. Specifically, we learn a Follow-The-Leader (FTL) shallow model online to capture the world dynamics, in which we plan using model predictive control to solve a set of tasks specified by arbitrary reward functions. The online world model is immune to forgetting by construction, with a proven regret bound of $\mathcal{O}(\sqrt{K^2D\log(T)})$ under mild assumptions. The planner searches for actions based solely on the latest online model, thus forming an FTL Online Agent (OA) that updates incrementally. To assess OA, we further design Continual Bench, a dedicated environment for CRL, and compare it with several strong baselines under the same model-planning algorithmic framework. The empirical results show that OA learns continuously to solve new tasks while not forgetting old skills, outperforming agents built on deep world models with various continual learning techniques.
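To make the "online world model plus planning" idea concrete, below is a minimal sketch of the general pattern the abstract describes: a Follow-The-Leader update that refits a simple dynamics model on all transitions seen so far, and a random-shooting model predictive control loop that plans actions against the latest model. The linear model, the class and function names (FTLLinearWorldModel, mpc_plan, reward_fn), and all dimensions are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

class FTLLinearWorldModel:
    """Follow-The-Leader online model (sketch): refit a linear dynamics model
    s' ~ W.T @ [s, a] on all transitions observed so far, via running
    sufficient statistics so no replay buffer is needed."""

    def __init__(self, state_dim, action_dim):
        self.in_dim = state_dim + action_dim
        self.xtx = np.eye(self.in_dim) * 1e-3          # small ridge term for invertibility
        self.xty = np.zeros((self.in_dim, state_dim))
        self.W = np.zeros((self.in_dim, state_dim))

    def update(self, s, a, s_next):
        x = np.concatenate([s, a])
        self.xtx += np.outer(x, x)
        self.xty += np.outer(x, s_next)
        # FTL step: best least-squares fit on all data collected so far.
        self.W = np.linalg.solve(self.xtx, self.xty)

    def predict(self, s, a):
        return np.concatenate([s, a]) @ self.W


def mpc_plan(model, s, reward_fn, horizon=10, n_samples=256, action_dim=2, rng=None):
    """Random-shooting MPC: sample action sequences, roll them out in the
    learned model, score them with the task's reward function, and return
    the first action of the best sequence."""
    if rng is None:
        rng = np.random.default_rng()
    seqs = rng.uniform(-1.0, 1.0, size=(n_samples, horizon, action_dim))
    returns = np.zeros(n_samples)
    for i, seq in enumerate(seqs):
        state = s.copy()
        for a in seq:
            returns[i] += reward_fn(state, a)
            state = model.predict(state, a)
    return seqs[np.argmax(returns), 0]
```

Because the planner queries only the current model and the current task's reward function, switching tasks in this sketch only means swapping reward_fn, while the single shared dynamics model keeps accumulating data from every task, which is the mechanism by which forgetting is avoided by construction.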
Similar Papers
Continual Reinforcement Learning for Cyber-Physical Systems: Lessons Learned and Open Challenges
Machine Learning (CS)
Teaches self-driving cars to learn new parking spots.
Ergodic Risk Measures: Towards a Risk-Aware Foundation for Continual Reinforcement Learning
Machine Learning (CS)
Helps robots learn new things without forgetting old ones.
Continual Knowledge Adaptation for Reinforcement Learning
Artificial Intelligence
Helps robots learn new jobs without forgetting old ones.