Energy-Weighted Flow Matching for Offline Reinforcement Learning
By: Shiyuan Zhang, Weitong Zhang, Quanquan Gu
Potential Business Impact:
Lets AI systems learn better decision-making policies from previously collected data, without needing new trial-and-error interaction.
This paper investigates energy guidance in generative modeling, where the target distribution is defined as $q(\mathbf x) \propto p(\mathbf x)\exp(-\beta \mathcal E(\mathbf x))$, with $p(\mathbf x)$ the data distribution and $\mathcal E(\mathbf x)$ the energy function. To comply with energy guidance, existing methods often require auxiliary procedures to learn intermediate guidance during the diffusion process. To overcome this limitation, we explore energy-guided flow matching, a generalized form of the diffusion process. We introduce energy-weighted flow matching (EFM), a method that directly learns the energy-guided flow without the need for auxiliary models. Theoretical analysis shows that energy-weighted flow matching accurately captures the guided flow. Additionally, we extend this methodology to energy-weighted diffusion models and apply it to offline reinforcement learning (RL) by proposing Q-weighted Iterative Policy Optimization (QIPO). Empirically, we demonstrate that the proposed QIPO algorithm improves performance in offline RL tasks. Notably, our algorithm is the first energy-guided diffusion model that operates independently of auxiliary models and the first exact energy-guided flow matching model in the literature.
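To make the weighting idea concrete, below is a minimal PyTorch sketch of one energy-weighted conditional flow matching training step. It assumes the core mechanism described in the abstract: per-sample conditional flow matching targets are reweighted by self-normalized weights proportional to $\exp(-\beta \mathcal E(\mathbf x_1))$, so the learned velocity field transports noise toward $q(\mathbf x) \propto p(\mathbf x)\exp(-\beta \mathcal E(\mathbf x))$ without any auxiliary guidance model. The names (`energy_weighted_fm_loss`, `v_theta`, `energy_fn`) and the linear interpolation path are illustrative assumptions, not the paper's exact construction.

```python
import torch
import torch.nn as nn

def energy_weighted_fm_loss(v_theta, x1, energy_fn, beta=1.0):
    """One energy-weighted conditional flow matching step (illustrative sketch).

    Reweights the standard CFM objective by self-normalized weights
    proportional to exp(-beta * E(x1)), so the learned flow targets
    q(x) proportional to p(x) * exp(-beta * E(x)).
    """
    b = x1.shape[0]
    x0 = torch.randn_like(x1)                  # source samples (standard Gaussian)
    t = torch.rand(b, 1)                       # uniform time in [0, 1]
    xt = (1 - t) * x0 + t * x1                 # linear interpolation path
    ut = x1 - x0                               # conditional target velocity
    with torch.no_grad():
        w = torch.softmax(-beta * energy_fn(x1), dim=0)  # energy weights
    per_sample = ((v_theta(xt, t) - ut) ** 2).sum(dim=1)
    return (w * per_sample).sum()

# Example usage with a toy velocity network and a quadratic energy:
dim = 2
net = nn.Sequential(nn.Linear(dim + 1, 64), nn.SiLU(), nn.Linear(64, dim))
v_theta = lambda x, t: net(torch.cat([x, t], dim=1))
energy_fn = lambda x: (x ** 2).sum(dim=1)      # toy energy E(x) = ||x||^2
x1 = torch.randn(128, dim)                     # batch of data samples
loss = energy_weighted_fm_loss(v_theta, x1, energy_fn, beta=0.5)
loss.backward()
```

In the offline RL setting of QIPO, the energy would correspond to the negative state-action value, $\mathcal E(\mathbf a) = -Q(\mathbf s, \mathbf a)$, so that high-value actions receive larger weights during policy training.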
Similar Papers
Energy Matching: Unifying Flow Matching and Energy-Based Models for Generative Modeling
Machine Learning (CS)
Makes AI create better pictures with more control.
Energy-Weighted Flow Matching: Unlocking Continuous Normalizing Flows for Efficient and Scalable Boltzmann Sampling
Machine Learning (Stat)
Helps computers draw samples from energy functions, not just from examples.
Equilibrium Matching: Generative Modeling with Implicit Energy-Based Models
Machine Learning (CS)
Makes computers create realistic pictures faster.