
Analytic Energy-Guided Policy Optimization for Offline Reinforcement Learning

Published: May 3, 2025 | arXiv ID: 2505.01822v1

By: Jifeng Hu, Sili Huang, Zhejian Yang, and more

Potential Business Impact:

Helps robots and software agents learn better decisions from previously collected data by guiding diffusion-model policies with energy functions.

Business Areas:
Energy Management, Energy

Conditional decision generation with diffusion models has shown strong competitiveness in reinforcement learning (RL). Recent studies reveal the relation between energy-function-guided diffusion models and constrained RL problems. The main challenge lies in estimating the intermediate energy, which is intractable during the generation process due to its log-expectation formulation. To address this issue, we propose Analytic Energy-guided Policy Optimization (AEPO). Specifically, we first provide a theoretical analysis and a closed-form solution of the intermediate guidance when the diffusion model follows a conditional Gaussian transformation. Then, we analyze the posterior Gaussian distribution inside the log-expectation and, under mild assumptions, obtain a target estimate of the log-expectation. Finally, we train an intermediate energy neural network to approximate this target estimate. We evaluate AEPO on more than 30 offline RL tasks; extensive experiments show that it surpasses numerous representative baselines on the D4RL offline reinforcement learning benchmarks.
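
The key computational step the abstract describes is fitting a network to the otherwise intractable log-expectation that defines the intermediate energy. The sketch below is a minimal, hypothetical PyTorch illustration of that idea, not the authors' implementation: it forms a Monte Carlo regression target from an assumed Gaussian posterior over the denoised sample x_0 and trains a small energy network toward it. The network architecture, the placeholder terminal energy, and the posterior statistics are all illustrative assumptions.

```python
# Minimal sketch (assumptions throughout, not the AEPO code):
# the intermediate energy is E_t(x_t) = -log E_{x_0 ~ q(x_0|x_t)}[exp(-E(x_0))],
# and a small network is regressed onto a sampled estimate of that target.

import torch
import torch.nn as nn

class IntermediateEnergyNet(nn.Module):
    """Approximates E_t(x_t, t); hypothetical architecture."""
    def __init__(self, dim: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim + 1, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x_t: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([x_t, t], dim=-1)).squeeze(-1)

def terminal_energy(x0: torch.Tensor) -> torch.Tensor:
    # Placeholder for the task-specific energy E(x_0), e.g. a negative Q-value.
    return 0.5 * (x0 ** 2).sum(dim=-1)

def log_expectation_target(x_t, posterior_mean, posterior_std, n_samples=16):
    """Monte Carlo estimate of -log E_{x_0 ~ N(mean, std^2)}[exp(-E(x_0))],
    using an assumed Gaussian posterior over x_0 given x_t."""
    eps = torch.randn(n_samples, *x_t.shape)
    x0_samples = posterior_mean + posterior_std * eps        # (S, B, D)
    log_w = -terminal_energy(x0_samples)                     # (S, B)
    # log-mean-exp via logsumexp for numerical stability
    return -(torch.logsumexp(log_w, dim=0)
             - torch.log(torch.tensor(float(n_samples))))

# One training step: regress the network onto the sampled target.
dim, batch = 4, 32
net = IntermediateEnergyNet(dim)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

x_t = torch.randn(batch, dim)
t = torch.rand(batch, 1)
post_mean, post_std = x_t, 0.3 * torch.ones_like(x_t)  # stand-in posterior stats

with torch.no_grad():
    target = log_expectation_target(x_t, post_mean, post_std)
loss = nn.functional.mse_loss(net(x_t, t), target)
opt.zero_grad(); loss.backward(); opt.step()
```

In the paper, the closed-form analysis and posterior derivation refine how this target is obtained; the sketch only shows where the log-expectation enters a training loop for the intermediate energy network.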

Country of Origin
🇨🇳 China

Page Count
26 pages

Category
Computer Science:
Machine Learning (CS)