A Diffusion Model Framework for Maximum Entropy Reinforcement Learning
By: Sebastian Sanokowski, Kaustubh Patil, Alois Knoll
Potential Business Impact:
Makes robots learn tasks faster and better.
Diffusion models have achieved remarkable success in data-driven learning and in sampling from complex, unnormalized target distributions. Building on this progress, we reinterpret Maximum Entropy Reinforcement Learning (MaxEntRL) as a diffusion model-based sampling problem. We tackle this problem by minimizing the reverse Kullback-Leibler (KL) divergence between the diffusion policy and the optimal policy distribution using a tractable upper bound. By applying the policy gradient theorem to this objective, we derive a modified surrogate objective for MaxEntRL that incorporates diffusion dynamics in a principled way. This leads to simple diffusion-based variants of Soft Actor-Critic (SAC), Proximal Policy Optimization (PPO), and Wasserstein Policy Optimization (WPO), termed DiffSAC, DiffPPO, and DiffWPO. All of these methods require only minor implementation changes to their base algorithms. On standard continuous control benchmarks, we find that DiffSAC, DiffPPO, and DiffWPO achieve higher returns and better sample efficiency than SAC and PPO.
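As a rough sketch of the objective described above (the notation here, Q for the soft action-value, α for the temperature, a_{0:T} for the denoising chain, and q for the fixed forward noising process, is our assumption and is not taken from the paper), the reverse-KL problem and one standard tractable upper bound can be written as:

\[
\pi^{*}(a \mid s) \propto \exp\!\big(Q(s,a)/\alpha\big), \qquad
\min_{\theta}\; \mathbb{E}_{s}\Big[\, D_{\mathrm{KL}}\big(\pi_{\theta}(\cdot \mid s)\,\|\,\pi^{*}(\cdot \mid s)\big) \Big].
\]

For a diffusion policy the marginal density \(\pi_{\theta}(a_{0} \mid s)\) is intractable, but the KL divergence over the full denoising chain upper-bounds the marginal KL (data-processing inequality):

\[
D_{\mathrm{KL}}\big(\pi_{\theta}(a_{0} \mid s)\,\|\,\pi^{*}(a_{0} \mid s)\big)
\;\le\;
D_{\mathrm{KL}}\big(\pi_{\theta}(a_{0:T} \mid s)\,\|\,\pi^{*}(a_{0} \mid s)\, q(a_{1:T} \mid a_{0})\big),
\]

where \(\pi_{\theta}(a_{0:T} \mid s) = p(a_{T})\prod_{t=1}^{T}\pi_{\theta}(a_{t-1} \mid a_{t}, s)\) is the learned reverse (denoising) process and \(q\) is the fixed forward noising process. Up to an additive constant independent of θ, the right-hand side involves only the per-step transition densities of the two chains and the soft value Q(s, a_{0}), so it can be estimated by sampling the denoising chain; per the abstract, applying the policy gradient theorem to this kind of bound is what yields the modified surrogate objectives behind DiffSAC, DiffPPO, and DiffWPO.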
Similar Papers
Data-regularized Reinforcement Learning for Diffusion Models at Scale
Machine Learning (CS)
Makes AI create better videos that people like.
Provable Maximum Entropy Manifold Exploration via Diffusion Models
Machine Learning (CS)
Helps computers invent new things, not just copy.