State Entropy Regularization for Robust Reinforcement Learning

Published: June 8, 2025 | arXiv ID: 2506.07085v2

By: Yonatan Ashlag, Uri Koren, Mirco Mutti, and more

Potential Business Impact:

Helps RL-driven systems (e.g., robots) stay reliable when deployed in conditions that differ from their training environment.

Business Areas:
Smart Cities, Real Estate

State entropy regularization has been empirically shown to improve exploration and sample complexity in reinforcement learning (RL). However, its theoretical guarantees have not been studied. In this paper, we show that state entropy regularization improves robustness to structured and spatially correlated perturbations. These types of variation are common in transfer learning but often overlooked by standard robust RL methods, which typically focus on small, uncorrelated changes. We provide a comprehensive characterization of these robustness properties, including formal guarantees under reward and transition uncertainty, as well as settings where the method performs poorly. Much of our analysis contrasts state entropy with the widely used policy entropy regularization, highlighting their different benefits. Finally, from a practical standpoint, we illustrate that, compared with policy entropy, the robustness advantages of state entropy are more sensitive to the number of rollouts used for policy evaluation.
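To make the contrast in the abstract concrete, the sketch below (not from the paper) illustrates the two quantities being regularized: policy entropy is the entropy of the action distribution at each state, while state entropy is the entropy of the state-visitation distribution, approximated here with a simple k-nearest-neighbor estimate over a batch of visited states. The estimator choice, the hyperparameter k, and all names are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (assumptions, not the paper's method): compare policy entropy
# with a k-NN estimate of state-visitation entropy.
import numpy as np


def policy_entropy(action_probs: np.ndarray) -> np.ndarray:
    """Per-state entropy H(pi(.|s)) for a batch of action distributions.

    action_probs: shape (batch, num_actions), rows sum to 1.
    """
    p = np.clip(action_probs, 1e-12, 1.0)
    return -(p * np.log(p)).sum(axis=-1)


def knn_state_entropy(states: np.ndarray, k: int = 5) -> float:
    """Rough k-NN (Kozachenko-Leonenko style) estimate of state-visitation entropy.

    states: shape (batch, state_dim) of visited states. A larger average
    distance to the k-th neighbor means visitation is more spread out,
    which yields a higher entropy estimate (up to additive constants).
    """
    n, d = states.shape
    # Pairwise Euclidean distances; O(n^2) is fine for a sketch.
    diffs = states[:, None, :] - states[None, :, :]
    dists = np.sqrt((diffs ** 2).sum(-1))
    np.fill_diagonal(dists, np.inf)            # exclude self-distances
    kth = np.sort(dists, axis=1)[:, k - 1]     # distance to k-th nearest neighbor
    return float(np.mean(d * np.log(kth + 1e-12)))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    probs = rng.dirichlet(np.ones(4), size=32)   # 32 states, 4 actions
    visited = rng.normal(size=(256, 2))          # 256 visited 2-D states
    print("mean policy entropy :", policy_entropy(probs).mean())
    print("state entropy (kNN) :", knn_state_entropy(visited, k=5))
```

In practice, a state-entropy bonus of this kind is added to the learning objective to encourage broad coverage of the state space, whereas a policy-entropy bonus only encourages randomness in the actions chosen at each state; the paper's analysis concerns the robustness consequences of that difference.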

Page Count
21 pages

Category
Computer Science:
Machine Learning (CS)