Data-Efficient Safe Policy Improvement Using Parametric Structure
By: Kasper Engelen, Guillermo A. Pérez, Marnix Suilen
Potential Business Impact:
Makes AI learn better with less data.
Safe policy improvement (SPI) is an offline reinforcement learning problem in which a new policy that reliably outperforms the behavior policy with high confidence must be computed from only a dataset and the behavior policy itself. Markov decision processes (MDPs) are the standard formalism for modeling environments in SPI. In many applications, additional information is available in the form of parametric dependencies between distributions in the transition dynamics. We make SPI more data-efficient by leveraging these dependencies through three contributions: (1) a parametric SPI algorithm that exploits known correlations between distributions to estimate the transition dynamics more accurately from the same amount of data; (2) a preprocessing technique that prunes redundant actions from the environment through a game-based abstraction; and (3) a more advanced preprocessing technique, based on satisfiability modulo theories (SMT) solving, that can identify more actions to prune. Empirical results and an ablation study show that our techniques increase the data efficiency of SPI by multiple orders of magnitude while maintaining the same reliability guarantees.
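To give a sense of the mechanism behind contribution (1), the sketch below is a minimal, hypothetical illustration rather than code from the paper: it compares the width of a Hoeffding-style confidence interval when two state-action pairs are estimated independently versus when their samples are pooled because they are assumed to share the same underlying parameter. The interval type, the shared-parameter setup, and all numbers are illustrative assumptions.

```python
# Illustrative sketch only: pooling samples across parametrically tied
# transition distributions tightens the confidence interval obtained
# from the same total amount of data.
import numpy as np

def hoeffding_ci_width(count, delta=0.05):
    """Half-width of a Hoeffding-style confidence interval for an estimated probability."""
    if count == 0:
        return 1.0
    return np.sqrt(np.log(2.0 / delta) / (2.0 * count))

rng = np.random.default_rng(0)
p_true = 0.3          # shared success probability of two tied transitions (assumed)
n_per_pair = 50       # samples observed for each of the two state-action pairs

# Independent estimation: each state-action pair uses only its own samples.
samples_a = rng.binomial(1, p_true, n_per_pair)
samples_b = rng.binomial(1, p_true, n_per_pair)
width_independent = hoeffding_ci_width(n_per_pair)

# Parametric (tied) estimation: both pairs are known to share the same parameter,
# so their samples can be pooled, shrinking the interval width by a factor of sqrt(2).
pooled = np.concatenate([samples_a, samples_b])
width_tied = hoeffding_ci_width(len(pooled))

print(f"independent: {samples_a.mean():.2f} +/- {width_independent:.2f}")
print(f"tied/pooled: {pooled.mean():.2f} +/- {width_tied:.2f}")
```

In this toy setting, knowing that two distributions coincide is equivalent to observing twice as much data for each of them; the paper's parametric SPI algorithm generalizes this idea to the confidence bounds used for safe policy improvement.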
Similar Papers
Deep SPI: Safe Policy Improvement via World Models
Machine Learning (CS)
Makes AI learn better and safer.
Promises Made, Promises Kept: Safe Pareto Improvements via Ex Post Verifiable Commitments
CS and Game Theory
Makes games fairer by letting players make promises.
Improving and Accelerating Offline RL in Large Discrete Action Spaces with Structured Policy Initialization
Machine Learning (CS)
Teaches robots to pick the best actions.