Online Optimization for Offline Safe Reinforcement Learning
By: Yassine Chemingui, Aryan Deshwal, Alan Fern, and more
Potential Business Impact:
Teaches robots to do tasks safely and well.
We study the problem of Offline Safe Reinforcement Learning (OSRL), where the goal is to learn a reward-maximizing policy from a fixed dataset under a cumulative cost constraint. We propose a novel OSRL approach that frames the problem as a minimax objective and solves it by combining offline RL with online optimization algorithms. We prove the approximate optimality of this approach when it is instantiated with an approximate offline RL oracle and a no-regret online optimization algorithm. We also present a practical approximation that can be combined with any offline RL algorithm, eliminating the need for offline policy evaluation. Empirical results on the DSRL benchmark demonstrate that our method reliably enforces safety constraints under stringent cost budgets while achieving high rewards. The code is available at https://github.com/yassineCh/O3SRL.
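The minimax framing in the abstract suggests a Lagrangian game, roughly max_pi min_{lambda >= 0} J_r(pi) - lambda * (J_c(pi) - b), alternating a policy player (the offline RL oracle) against a dual player (a no-regret update on the multiplier). Below is a minimal Python sketch of that alternation, under stated assumptions: `offline_rl_oracle` and `estimate_cost` are hypothetical placeholders for any offline RL algorithm and cost estimator, not the paper's actual interfaces, and the sketch omits the practical approximation that removes offline policy evaluation.

```python
import numpy as np

# Hedged sketch of a Lagrangian minimax loop for OSRL.
# `offline_rl_oracle` and `estimate_cost` are hypothetical callables:
# the former stands in for any approximate offline RL algorithm that
# maximizes the scalarized reward r - lam * c on the fixed dataset, and
# the latter for an off-policy estimate of a policy's cumulative cost.

def lagrangian_osrl(dataset, budget, offline_rl_oracle, estimate_cost,
                    iters=50, lr=0.1, lam_max=100.0):
    """Alternate an offline RL oracle with projected online gradient
    ascent on the Lagrange multiplier (a no-regret update)."""
    lam = 0.0
    policies = []
    for t in range(iters):
        # Policy player: best response to the current multiplier,
        # i.e., maximize r - lam * c using offline RL on the dataset.
        policy = offline_rl_oracle(dataset, cost_weight=lam)
        policies.append(policy)
        # Dual player: move lam in the direction of the constraint
        # violation, then project back onto the interval [0, lam_max].
        violation = estimate_cost(policy, dataset) - budget
        lam = float(np.clip(lam + lr * violation, 0.0, lam_max))
    # Returning all iterates mirrors the usual no-regret analysis, where
    # guarantees hold for the mixture of policies; a practical system
    # might instead return the last or best-performing iterate.
    return policies, lam
```

In this sketch the multiplier grows when the estimated cost exceeds the budget b, steering subsequent oracle calls toward safer policies, and shrinks toward zero when the constraint is slack.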
Similar Papers
Guardian: Decoupling Exploration from Safety in Reinforcement Learning
Machine Learning (CS)
Teaches robots to learn safely and quickly.
A Tutorial: An Intuitive Explanation of Offline Reinforcement Learning Theory
Machine Learning (CS)
Teaches computers to learn from old data.
MOORL: A Framework for Integrating Offline-Online Reinforcement Learning
Machine Learning (CS)
Teaches robots to learn from past mistakes.