Performative Policy Gradient: Optimality in Performative Reinforcement Learning
By: Debabrota Basu, Udvas Das, Brahim Driss, and more
Post-deployment machine learning algorithms often influence the environments they act in, and thus shift the underlying dynamics, a feedback effect that standard reinforcement learning (RL) methods ignore. While designing optimal algorithms in this performative setting has recently been studied in supervised learning, the RL counterpart remains under-explored. In this paper, we prove the performative counterparts of the performance difference lemma and the policy gradient theorem in RL, and further introduce the Performative Policy Gradient algorithm (PePG). PePG is the first policy gradient algorithm designed to account for performativity in RL. Under softmax parametrisation, both with and without entropy regularisation, we prove that PePG converges to performatively optimal policies, i.e. policies that remain optimal under the distribution shifts induced by themselves. Thus, PePG significantly extends prior work in performative RL, which achieves performative stability but not optimality. Furthermore, our empirical analysis on standard performative RL environments validates that PePG outperforms standard policy gradient algorithms and existing performative RL algorithms that target stability.
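To see the stability-versus-optimality gap the abstract describes, here is a minimal sketch on a toy two-armed performative bandit, a drastic simplification of the paper's RL setting and not the authors' PePG algorithm. The reward model (`rewards`), the shift strength `0.8`, and the `train` loop are all hypothetical choices for illustration: each arm's reward degrades as the deployed softmax policy concentrates on it, a stability-seeking update ignores this feedback (repeated retraining against the currently shifted rewards), while a performative gradient adds the extra term accounting for how the shift itself depends on the policy.

```python
import numpy as np

def softmax(theta):
    z = np.exp(theta - theta.max())
    return z / z.sum()

def rewards(pi):
    # Hypothetical performative effect: each arm's mean reward drops as the
    # deployed policy puts more mass on it (a policy-induced distribution shift).
    base = np.array([1.0, 0.8])
    return base - 0.8 * pi

def perf_value(pi):
    # Performative value: expected reward under the shift the policy induces.
    return float(pi @ rewards(pi))

def train(performative_term, lr=0.5, steps=2000):
    theta = np.zeros(2)
    for _ in range(steps):
        pi = softmax(theta)
        c = rewards(pi)          # dJ/dpi, treating the induced shift as fixed
        if performative_term:
            # Extra term (dr/dpi)^T pi that a performative gradient must carry;
            # here dr/dpi = -0.8 * I, so the correction is simply -0.8 * pi.
            c = c - 0.8 * pi
        # Chain rule through the softmax Jacobian diag(pi) - pi pi^T.
        theta += lr * pi * (c - pi @ c)
    return softmax(theta)

pi_stable = train(performative_term=False)  # repeated retraining -> stability
pi_perf = train(performative_term=True)     # full gradient -> optimality
```

In this toy model the stability-seeking update settles where the two shifted rewards are equal (`pi_stable[0] = 0.625`), while the full performative gradient finds the maximiser of the performative value (`pi_perf[0] = 0.5625`), which attains a strictly higher `perf_value`: a miniature version of the optimality-versus-stability distinction the paper formalises.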
Similar Papers
Independent Learning in Performative Markov Potential Games
Machine Learning (CS)
Makes AI agents learn better when they change the game.
On Corruption-Robustness in Performative Reinforcement Learning
Machine Learning (CS)
Makes AI learn safely even with bad information.
Reparameterization Proximal Policy Optimization
Machine Learning (CS)
Teaches robots to learn faster and more reliably.