On the Effect of Regularization in Policy Mirror Descent
By: Jan Felix Kleuker, Aske Plaat, Thomas Moerland
Potential Business Impact:
Makes machine learning more stable and reliable.
Policy Mirror Descent (PMD) has emerged as a unifying framework in reinforcement learning (RL) by linking policy gradient methods with a first-order optimization method known as mirror descent. At its core, PMD incorporates two key regularization components: (i) a distance term that enforces a trust region for stable policy updates and (ii) an MDP regularizer that augments the reward function to promote structure and robustness. While PMD has been extensively studied in theory, empirical investigations remain scarce. This work provides a large-scale empirical analysis of the interplay between these two regularization techniques, running over 500k training seeds on small RL environments. Our results demonstrate that, although the two regularizers can partially substitute each other, their precise combination is critical for achieving robust performance. These findings highlight the potential for advancing research on more robust algorithms in RL, particularly with respect to hyperparameter sensitivity.
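As a rough illustration of how the two regularizers enter a PMD update, the sketch below implements one tabular step under a common instantiation that the abstract does not pin down: the distance term is taken to be a KL divergence (negative-entropy mirror map) and the MDP regularizer is entropy. The function name pmd_update, the stepsize eta, and the regularization strength tau are illustrative assumptions, not details from the paper.

```python
import numpy as np

def pmd_update(pi_k, Q, eta, tau):
    """One tabular PMD step, assuming a KL trust region with stepsize eta
    and entropy regularization of strength tau (a sketch, not the paper's code).

    pi_k : (S, A) current policy, rows sum to 1, entries assumed strictly positive
    Q    : (S, A) (regularized) action values under pi_k

    Closed-form solution of
        argmax_p <Q(s,.), p> - tau * sum_a p_a log p_a - (1/eta) * KL(p, pi_k(.|s)):
        pi_{k+1}(a|s) ∝ pi_k(a|s)^(1/(1+eta*tau)) * exp(eta * Q(s,a) / (1+eta*tau))
    """
    logits = (np.log(pi_k) + eta * Q) / (1.0 + eta * tau)
    logits -= logits.max(axis=1, keepdims=True)        # numerical stability
    pi_next = np.exp(logits)
    return pi_next / pi_next.sum(axis=1, keepdims=True)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    S, A = 4, 3
    pi = np.full((S, A), 1.0 / A)      # start from the uniform policy
    Q = rng.normal(size=(S, A))        # placeholder action values
    pi = pmd_update(pi, Q, eta=0.5, tau=0.1)
    print(pi.sum(axis=1))              # each row remains a probability distribution
```

In this instantiation the two hyperparameters interact only through the factor 1/(1 + eta*tau): setting tau = 0 recovers a pure trust-region (NPG-style) update, while letting eta grow with tau > 0 pushes the update toward the soft-greedy policy proportional to exp(Q/tau), which is one way to see how the two regularizers can partially substitute for each other.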
Similar Papers
StaQ it! Growing neural networks for Policy Mirror Descent
Machine Learning (CS)
Makes machine learning more stable and predictable.
On the Convergence of Policy Mirror Descent with Temporal Difference Evaluation
Optimization and Control
Teaches computers to learn better from experience.
Convergence of Policy Mirror Descent Beyond Compatible Function Approximation
Machine Learning (CS)
Makes AI learn better in complex situations.