Safe Guaranteed Dynamics Exploration with Probabilistic Models
By: Manish Prajapat, Johannes Köhler, Melanie N. Zeilinger, and more
Potential Business Impact:
Teaches robots to learn safely and quickly.
Ensuring both optimality and safety is critical for the real-world deployment of agents, but it becomes particularly challenging when the system dynamics are unknown. To address this problem, we introduce a notion of maximum safe dynamics learning via sufficient exploration in the space of safe policies. We propose a pessimistically safe framework that optimistically explores informative states and, even when model uncertainty prevents reaching them, ensures continuous online learning of the dynamics. The framework achieves first-of-its-kind results: it learns the dynamics model sufficiently, up to an arbitrarily small tolerance (subject to noise), in finite time, while ensuring provably safe operation throughout with high probability and without requiring resets. Building on this, we propose an algorithm that maximizes rewards while learning the dynamics only to the extent needed to achieve close-to-optimal performance. Unlike typical reinforcement learning (RL) methods, our approach operates online in a non-episodic setting and ensures safety throughout the learning process. We demonstrate the effectiveness of our approach in challenging domains such as autonomous car racing and drone navigation under aerodynamic effects, scenarios where safety is critical and accurate modeling is difficult.
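To make the core loop concrete, below is a minimal sketch of the "pessimistically safe, optimistically exploring" idea in a toy 1-D setting: a small RBF-kernel Gaussian process models the unknown dynamics, an action is declared safe only if its entire confidence interval for the next state lies inside the safe set (pessimism), and among safe actions the most uncertain one is chosen (optimism). The dynamics, safe set, confidence scaling BETA, and fallback action are all illustrative assumptions for this sketch, not the paper's construction or guarantees.

```python
import numpy as np

rng = np.random.default_rng(0)

def true_dynamics(x, u):
    # Unknown to the learner; only noisy transitions are observed.
    return x + 0.1 * u - 0.05 * x**2

class GPModel:
    """Tiny RBF-kernel GP regression over (x, u) -> next state."""
    def __init__(self, lengthscale=0.6, signal=1.0, noise=0.05):
        self.ls, self.sf, self.sn = lengthscale, signal, noise
        self.X, self.y = np.empty((0, 2)), np.empty(0)

    def _k(self, A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return self.sf**2 * np.exp(-0.5 * d2 / self.ls**2)

    def add(self, x, u, x_next):
        self.X = np.vstack([self.X, [x, u]])
        self.y = np.append(self.y, x_next)

    def predict(self, Xs):
        if len(self.y) == 0:
            return np.zeros(len(Xs)), self.sf * np.ones(len(Xs))
        K = self._k(self.X, self.X) + self.sn**2 * np.eye(len(self.y))
        Ks = self._k(Xs, self.X)
        Kinv = np.linalg.inv(K)
        mu = Ks @ Kinv @ self.y
        var = self.sf**2 - np.einsum('ij,jk,ik->i', Ks, Kinv, Ks)
        return mu, np.sqrt(np.maximum(var, 1e-12))

# Safe set |x| <= 1; BETA scales the confidence interval (assumed value).
X_MAX, BETA = 1.0, 2.0
model, x = GPModel(), 0.0
actions = np.linspace(-1.0, 1.0, 21)

for t in range(30):
    Xs = np.column_stack([np.full_like(actions, x), actions])
    mu, sigma = model.predict(Xs)
    # Pessimistic safety: the whole confidence interval of the
    # predicted next state must lie inside the safe set.
    safe = np.abs(mu) + BETA * sigma <= X_MAX
    if not safe.any():
        u = 0.0  # assumed known-safe backup action
    else:
        # Optimistic exploration: among pessimistically safe actions,
        # pick the most informative one (largest predictive uncertainty).
        u = actions[safe][np.argmax(sigma[safe])]
    x_next = true_dynamics(x, u) + 0.01 * rng.standard_normal()
    model.add(x, u, x_next)
    x = x_next
    print(f"t={t:2d}  x={x:+.3f}  u={u:+.2f}  max sigma={sigma.max():.3f}")
```

Running the sketch shows the characteristic behavior: early on, only the backup action passes the pessimistic check; as data accumulates, the certified-safe action set expands and exploration pushes toward the most uncertain transitions without ever leaving the safe set.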
Similar Papers
Safely Learning Controlled Stochastic Dynamics
Machine Learning (Stat)
Keeps robots safe while learning new tasks.
Safe Exploration via Policy Priors
Machine Learning (CS)
Lets robots learn safely without crashing.
Probabilistic Shielding for Safe Reinforcement Learning
Machine Learning (Stat)
Keeps robots safe while they learn new tasks.