Provably Safe Reinforcement Learning using Entropy Regularizer

Published: January 13, 2026 | arXiv ID: 2601.08646v1

By: Abhijit Mazumdar, Rafal Wisniewski, Manuela L. Bujorianu

We consider the problem of learning the optimal policy for Markov decision processes with safety constraints. We formulate the problem in a reach-avoid setup. Our goal is to design online reinforcement learning algorithms that satisfy the safety constraints with arbitrarily high probability during the learning phase. To this end, we first propose an algorithm based on the optimism in the face of uncertainty (OFU) principle. Building on this first algorithm, we propose our main algorithm, which utilizes entropy regularization. We carry out a finite-sample analysis of both algorithms and derive their regret bounds. We demonstrate that the inclusion of entropy regularization improves the regret and drastically reduces the episode-to-episode variability that is inherent in OFU-based safe RL algorithms.
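To give intuition for the variability claim, the minimal sketch below (not the authors' algorithm; the optimistic Q-values, the temperature `tau`, and the toy three-action state are illustrative assumptions) contrasts a greedy, OFU-style action choice with an entropy-regularized softmax choice: a tiny perturbation of the optimistic estimates flips the greedy action, while the softmax policy changes only slightly.

```python
# Minimal sketch (assumed setup, not the paper's method): greedy OFU choice
# versus an entropy-regularized (softmax) choice over optimistic Q-values.
import numpy as np

rng = np.random.default_rng(0)

def greedy_action(q_optimistic):
    """OFU-style choice: act greedily w.r.t. optimistic Q-values.
    Small changes in the estimates can flip the argmax, one source of
    episode-to-episode variability."""
    return int(np.argmax(q_optimistic))

def entropy_regularized_policy(q_optimistic, tau=0.5):
    """Entropy-regularized choice: softmax over Q-values with temperature tau.
    The regularizer turns the hard argmax into a smooth distribution, so the
    policy varies gradually with the estimates."""
    logits = q_optimistic / tau
    logits -= logits.max()                       # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    action = int(rng.choice(len(q_optimistic), p=probs))
    return action, probs

# Two nearby optimistic estimates for a hypothetical 3-action state.
q_a = np.array([1.00, 0.98, 0.10])
q_b = np.array([0.98, 1.00, 0.10])   # tiny perturbation flips the greedy action

print(greedy_action(q_a), greedy_action(q_b))            # 0 vs 1: abrupt switch
print(entropy_regularized_policy(q_a)[1].round(3))       # smooth probabilities
print(entropy_regularized_policy(q_b)[1].round(3))       # nearly unchanged
```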

Category
Computer Science:
Machine Learning (CS)