Analytic Energy-Guided Policy Optimization for Offline Reinforcement Learning
By: Jifeng Hu, Sili Huang, Zhejian Yang, and more
Potential Business Impact:
Helps robots learn better decisions from past data using energy-guided generation.
Conditional decision generation with diffusion models has shown strong competitiveness in reinforcement learning (RL). Recent studies reveal the relation between energy-function-guided diffusion models and constrained RL problems. The main challenge lies in estimating the intermediate energy, which is intractable because of its log-expectation formulation during the generation process. To address this issue, we propose Analytic Energy-guided Policy Optimization (AEPO). Specifically, we first provide a theoretical analysis and a closed-form solution for the intermediate guidance when the diffusion model follows a conditional Gaussian transformation. Then, we analyze the posterior Gaussian distribution in the log-expectation formulation and derive a target estimate of the log-expectation under mild assumptions. Finally, we train an intermediate energy neural network to approximate this target estimate. We apply AEPO to more than 30 offline RL tasks to demonstrate its effectiveness. Extensive experiments show that it surpasses numerous representative baselines on the D4RL offline reinforcement learning benchmarks.
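The abstract's last step, training an intermediate energy network toward a target estimate of the log-expectation, can be illustrated with a minimal regression sketch. This is not the authors' implementation: the network name `IntermediateEnergy`, the placeholder `terminal_energy`, the cosine noise schedule, and the posterior-mean proxy used to build the target are all assumptions made for illustration; the paper derives its own closed-form target under a conditional Gaussian transformation.

```python
# Minimal sketch (not the paper's code): regress an intermediate energy
# network E_theta(x_t, t) onto a surrogate target built from a terminal
# energy E_0 evaluated at a posterior-mean proxy of x_0. All names and the
# target construction below are illustrative assumptions.
import torch
import torch.nn as nn

class IntermediateEnergy(nn.Module):
    """Predicts a time-dependent intermediate energy E_t(x_t)."""
    def __init__(self, dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim + 1, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x_t, t):
        # t is a float tensor in [0, 1), appended as an extra input feature.
        return self.net(torch.cat([x_t, t[:, None]], dim=-1)).squeeze(-1)

def terminal_energy(x0):
    # Placeholder for a task-specific terminal energy (e.g., a learned
    # negative Q-value in offline RL); quadratic here only for illustration.
    return 0.5 * (x0 ** 2).sum(dim=-1)

def train_step(model, optimizer, x0, alpha_bar):
    """One regression step toward a target estimate of the log-expectation.

    Assumption: with a forward process q(x_t | x_0) =
    N(sqrt(alpha_bar_t) x_0, (1 - alpha_bar_t) I), the target evaluates the
    terminal energy at a crude posterior-mean estimate of x_0.
    """
    t = torch.rand(x0.shape[0])                   # random diffusion times
    a = alpha_bar(t)[:, None]                     # cumulative noise schedule
    noise = torch.randn_like(x0)
    x_t = a.sqrt() * x0 + (1 - a).sqrt() * noise  # noised sample
    x0_hat = x_t / a.sqrt()                       # posterior-mean proxy of x_0
    target = terminal_energy(x0_hat).detach()
    loss = ((model(x_t, t) - target) ** 2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    torch.manual_seed(0)
    dim = 6
    model = IntermediateEnergy(dim)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    alpha_bar = lambda t: torch.cos(0.5 * torch.pi * t) ** 2  # cosine schedule
    data = torch.randn(512, dim)                              # stand-in dataset
    for step in range(100):
        batch = data[torch.randint(0, 512, (64,))]
        train_step(model, opt, batch, alpha_bar)
```

Once trained, the gradient of such a network with respect to x_t could serve as guidance during sampling; the exact guidance form used by AEPO follows the closed-form analysis described in the abstract rather than this simplified surrogate.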
Similar Papers
Evolutionary Policy Optimization
Machine Learning (CS)
Teaches robots to learn faster and better.
Energy-Weighted Flow Matching for Offline Reinforcement Learning
Machine Learning (CS)
Makes AI learn better from past information.
EEPO: Exploration-Enhanced Policy Optimization via Sample-Then-Forget
Computation and Language
Helps AI learn new things by forgetting and trying again.