A Continual Offline Reinforcement Learning Benchmark for Navigation Tasks
By: Anthony Kobanda, Odalric-Ambrym Maillard, Rémy Portelas
Potential Business Impact:
Helps robots learn new tasks without forgetting old ones.
Autonomous agents operating in domains such as robotics or video game simulations must adapt to changing tasks without forgetting previous ones. This process, called Continual Reinforcement Learning, poses non-trivial difficulties, from preventing catastrophic forgetting to ensuring the scalability of the approaches considered. Building on recent advances, we introduce a benchmark providing a suite of video-game navigation scenarios, thus filling a gap in the literature and capturing key challenges: catastrophic forgetting, task adaptation, and memory efficiency. We define a varied set of tasks and datasets, evaluation protocols, and metrics to assess the performance of algorithms, including state-of-the-art baselines. Our benchmark is designed not only to foster reproducible research and accelerate progress in continual reinforcement learning for gaming, but also to provide a reproducible framework for production pipelines, helping practitioners identify and apply effective approaches.
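The abstract does not spell out the benchmark's exact metric definitions, but the challenges it names (catastrophic forgetting, task adaptation) are commonly quantified from a task-by-stage performance matrix in the continual-learning literature. Below is a minimal Python sketch of that standard bookkeeping, assuming `perf[i, j]` holds the evaluation return on task `j` after training stage `i`; the function name and the toy numbers are illustrative, not taken from the paper.

```python
import numpy as np

def continual_metrics(perf: np.ndarray) -> dict:
    """Summarize a continual-RL run from its performance matrix.

    perf[i, j] = evaluation return on task j after finishing training
    stage i (rows: training stages, cols: tasks). These definitions are
    common in the continual-learning literature and may differ from the
    benchmark's own protocol.
    """
    T = perf.shape[0]
    final = perf[-1]  # performance on every task once training is done
    # Average final performance across all tasks.
    avg_performance = float(final.mean())
    # Forgetting on task j: best performance ever reached minus final
    # performance; averaged over all but the most recently trained task.
    forgetting = float(np.mean([perf[:, j].max() - final[j] for j in range(T - 1)]))
    return {"avg_performance": avg_performance, "forgetting": forgetting}

# Toy example: 3 tasks trained sequentially; task 0 degrades over time.
P = np.array([
    [0.9, 0.1, 0.0],
    [0.6, 0.8, 0.1],
    [0.5, 0.7, 0.9],
])
print(continual_metrics(P))  # forgetting > 0 reveals catastrophic forgetting
```

A positive forgetting score indicates that earlier tasks lost performance as later ones were learned, which is precisely the failure mode the benchmark is built to expose.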
Similar Papers
C-NAV: Towards Self-Evolving Continual Object Navigation in Open World
Robotics
Helps robots learn new things without forgetting old ones.
FindingDory: A Benchmark to Evaluate Memory in Embodied Agents
CV and Pattern Recognition
Helps robots remember and act over time.
An Empirical Study of Deep Reinforcement Learning in Continuing Tasks
Artificial Intelligence
Helps robots learn longer without stopping.