Continual Reinforcement Learning for Cyber-Physical Systems: Lessons Learned and Open Challenges

Published: November 19, 2025 | arXiv ID: 2511.15652v1

By: Kim N. Nolle, Ivana Dusparic, Rhodri Cusack, and more

Potential Business Impact:

Teaches self-driving cars to adapt to new parking scenarios.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

Continual learning (CL) is a branch of machine learning that aims to enable agents to adapt and generalise previously learned abilities so that these can be reapplied to new tasks or environments. This is particularly useful in multi-task settings or in non-stationary environments, where the dynamics can change over time, as in cyber-physical systems such as autonomous driving. However, despite recent advances in CL, successfully applying it to reinforcement learning (RL) remains an open problem. This paper highlights open challenges in continual RL (CRL) based on experiments in an autonomous driving environment. In this environment, the agent must learn to park successfully in four different scenarios, corresponding to parking spaces oriented at varying angles. The agent is trained in these four scenarios one after another, representing a CL setting, using Proximal Policy Optimisation (PPO). These experiments exposed a number of open challenges in CRL: finding suitable abstractions of the environment, oversensitivity to hyperparameters, catastrophic forgetting, and efficient use of neural network capacity. Based on these identified challenges, we present open research questions that must be addressed to create robust CRL systems. In addition, the identified challenges call into question the suitability of neural networks for CL. We also identify the need for interdisciplinary research, in particular between computer science and neuroscience.
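The sequential-training setup the abstract describes, and the catastrophic forgetting it exposes, can be illustrated with a deliberately tiny stand-in. The sketch below is not the paper's PPO setup: it uses a toy tabular agent, and all task names, reward rules, and hyperparameters here are illustrative assumptions. The point it demonstrates is the same failure mode, though: when one shared set of parameters is trained on four "parking" tasks in sequence, later training overwrites what earlier tasks learned.

```python
import random

# Toy stand-in for the paper's four parking scenarios: each "task" rewards
# a different action (parking angle). These names and rewards are invented
# for illustration; the paper trains PPO in a driving simulator instead.
TASKS = {f"angle_{a}": best for a, best in zip((0, 30, 60, 90), range(4))}

N_ACTIONS = 4
ALPHA = 0.5  # learning rate for the naive value update


def greedy(q):
    """Index of the currently highest-valued action."""
    return max(range(N_ACTIONS), key=q.__getitem__)


def train_on_task(q, best_action, steps=300, eps=0.1):
    """Epsilon-greedy updates on one task; the shared table q plays the
    role of a shared neural network trained across tasks."""
    for _ in range(steps):
        a = random.randrange(N_ACTIONS) if random.random() < eps else greedy(q)
        r = 1.0 if a == best_action else 0.0
        q[a] += ALPHA * (r - q[a])


random.seed(0)
q = [0.0] * N_ACTIONS  # one shared value table, reused across all tasks

history = []
for name, best in TASKS.items():
    train_on_task(q, best)
    # Typically True right after training on that task.
    history.append((name, greedy(q) == best))

# Re-evaluate every task at the end: earlier optima have been overwritten,
# so usually only the most recently trained task is still solved.
final_ok = {name: greedy(q) == best for name, best in TASKS.items()}
print(history)
print(final_ok)
```

The contrast between `history` (each task solved while it is being trained) and `final_ok` (only the last task solved afterwards) is the tabular analogue of the catastrophic forgetting the paper observes with PPO's neural policy; CRL methods aim to close exactly that gap.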

Country of Origin
🇮🇪 Ireland

Page Count
5 pages

Category
Computer Science:
Machine Learning (CS)