Learning Safe Control via On-the-Fly Bandit Exploration
By: Alexandre Capone, Ryan Cosner, Aaron Ames, and more
Potential Business Impact:
Keeps robots safe while they learn new things.
Control tasks with safety requirements under high levels of model uncertainty are increasingly common. Machine learning techniques are frequently used to address such tasks, typically by leveraging model error bounds to specify robust constraint-based safety filters. However, if the learned model uncertainty is very high, the corresponding filters can become infeasible, meaning no control input satisfies the constraints imposed by the safety filter. While most works address this issue by assuming some form of safe backup controller, ours tackles it by collecting additional data on the fly using a Gaussian process bandit-type algorithm. We combine a control barrier function with a learned model to specify a robust certificate that guarantees safety whenever it is feasible. Whenever infeasibility occurs, we leverage the control barrier function to guide exploration, ensuring that the collected data contributes toward the safety of the closed-loop system. By combining a safety filter with exploration in this manner, our method provably achieves safety in a setting that allows for a zero-mean prior dynamics model, without requiring a backup controller. To the best of our knowledge, it is the first safe learning-based control method that achieves this.
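Since the abstract describes a concrete loop (a robust control-barrier-function filter built on a Gaussian process model, with exploration triggered whenever the filter is infeasible), a compact sketch may help make the idea tangible. Everything below is an illustrative toy, not the paper's algorithm: the scalar dynamics x_dot = u + d(x), the barrier h(x) = 1 - x^2, the constants ALPHA, BETA, and U_MAX, and the simple acquisition rule in explore_input are all assumptions made for this example.

```python
# Hedged sketch: robust CBF safety filter on a GP dynamics model, with
# CBF-guided exploration when the filter is infeasible. All constants and
# the acquisition rule are illustrative, not the paper's formulation.
import numpy as np

# --- Minimal GP regression (RBF kernel) for the unknown residual d(x) ---
class GP:
    def __init__(self, lengthscale=0.5, signal=1.0, noise=1e-3):
        self.ls, self.sf, self.sn = lengthscale, signal, noise
        self.X, self.y = np.empty((0, 1)), np.empty(0)

    def _k(self, A, B):
        d = A[:, None, 0] - B[None, :, 0]
        return self.sf**2 * np.exp(-0.5 * (d / self.ls) ** 2)

    def add(self, x, y):
        self.X = np.vstack([self.X, [[x]]])
        self.y = np.append(self.y, y)

    def predict(self, x):
        if len(self.y) == 0:                     # zero-mean prior model
            return 0.0, self.sf
        K = self._k(self.X, self.X) + self.sn * np.eye(len(self.y))
        k = self._k(self.X, np.array([[x]])).ravel()
        mu = k @ np.linalg.solve(K, self.y)
        var = self.sf**2 - k @ np.linalg.solve(K, k)
        return mu, np.sqrt(max(var, 1e-12))

# --- Toy problem: x_dot = u + d(x), safe set h(x) = 1 - x^2 >= 0 ---
d_true = lambda x: 0.8 * np.sin(2 * x)           # unknown residual dynamics
h      = lambda x: 1.0 - x**2                    # control barrier function
dh     = lambda x: -2.0 * x                      # gradient of h
ALPHA, BETA, U_MAX = 1.0, 2.0, 1.5               # CBF gain, confidence scale, input bound

def safe_input(x, u_nom, gp):
    """Robust CBF filter: find u closest to u_nom satisfying
    dh*(u + mu) - BETA*sigma*|dh| >= -ALPHA*h, or report infeasibility."""
    mu, sigma = gp.predict(x)
    a = dh(x)
    b = -ALPHA * h(x) - dh(x) * mu + BETA * sigma * abs(dh(x))
    # Constraint is a*u >= b on u in [-U_MAX, U_MAX]; solvable in closed form.
    if abs(a) < 1e-9:
        return (u_nom, True) if 0.0 >= b else (None, False)
    lo, hi = (b / a, U_MAX) if a > 0 else (-U_MAX, b / a)
    if lo > hi:
        return None, False                       # robust filter infeasible
    return float(np.clip(u_nom, lo, hi)), True

def explore_input(x):
    """Exploration when the filter is infeasible: probe the admissible input
    that is least barrier-violating (a stand-in for the paper's GP-bandit
    acquisition, which uses the CBF to guide data collection)."""
    cands = np.linspace(-U_MAX, U_MAX, 41)
    scores = [dh(x) * u + ALPHA * h(x) for u in cands]
    return float(cands[int(np.argmax(scores))])

# --- Closed loop: filter when feasible, explore and collect data otherwise ---
gp, x, dt = GP(), 0.8, 0.05
for t in range(200):
    u, feasible = safe_input(x, u_nom=1.0, gp=gp)
    if not feasible:
        u = explore_input(x)
    x_dot = u + d_true(x)
    gp.add(x, x_dot - u)                         # measured residual, d(x)
    x += dt * x_dot
    assert h(x) > -0.05, "safety margin violated in this toy run"
print(f"final state {x:.3f}, data points collected: {len(gp.y)}")
```

Note how the GP's zero-mean prior makes the robust filter infeasible near the safe-set boundary at first, so the exploration branch is what gathers the data that later restores feasibility; that is the structural point the abstract makes. A faithful implementation would solve a quadratic program over vector-valued inputs and use a calibrated confidence parameter rather than the fixed BETA assumed here.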
Similar Papers
Safely Learning Controlled Stochastic Dynamics
Machine Learning (Stat)
Keeps robots safe while learning new tasks.
Computationally and Sample Efficient Safe Reinforcement Learning Using Adaptive Conformal Prediction
Robotics
Makes robots learn safely without crashing.
Probabilistically safe and efficient model-based reinforcement learning
Systems and Control
Makes robots learn to do dangerous jobs safely.