Flow-Based Policy for Online Reinforcement Learning
By: Lei Lv, Yunfei Li, Yu Luo, and more
Potential Business Impact:
Teaches robots to learn new skills faster.
We present FlowRL, a novel framework for online reinforcement learning that integrates flow-based policy representation with Wasserstein-2-regularized optimization. We argue that, in addition to training signals, enhancing the expressiveness of the policy class is crucial for performance gains in RL. Flow-based generative models offer such potential, excelling at capturing complex, multimodal action distributions. However, their direct application in online RL is challenging due to a fundamental objective mismatch: standard flow training optimizes for static data imitation, while RL requires value-based policy optimization through a dynamic buffer, leading to difficult optimization landscapes. FlowRL first models policies via a state-dependent velocity field, generating actions through deterministic ODE integration from noise. We derive a constrained policy search objective that jointly maximizes Q through the flow policy while bounding the Wasserstein-2 distance to a behavior-optimal policy implicitly derived from the replay buffer. This formulation effectively aligns flow optimization with the RL objective, enabling efficient and value-aware policy learning despite the complexity of the policy class. Empirical evaluations on the DMControl and HumanoidBench benchmarks demonstrate that FlowRL achieves competitive performance in online reinforcement learning.
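To make the two ideas in the abstract concrete, here is a minimal PyTorch-style sketch of (1) sampling an action by integrating a state-conditioned velocity field with a fixed-step Euler ODE solver from Gaussian noise, and (2) an actor loss that maximizes Q while penalizing a squared-distance surrogate for the Wasserstein-2 term toward reference actions from the replay buffer. The module names, network sizes, step count, and the w2_weight coefficient are illustrative assumptions, not details taken from the paper.

```python
# Illustrative sketch only; architecture and hyperparameters are assumptions.
import torch
import torch.nn as nn


class VelocityField(nn.Module):
    """v_theta(s, a_t, t): state-conditioned velocity of the action flow."""

    def __init__(self, state_dim: int, action_dim: int, hidden: int = 256):
        super().__init__()
        self.action_dim = action_dim
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim + 1, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, action_dim),
        )

    def forward(self, state, a_t, t):
        # t is a (batch, 1) tensor of flow times in [0, 1].
        return self.net(torch.cat([state, a_t, t], dim=-1))


def sample_action(vf: VelocityField, state: torch.Tensor, num_steps: int = 10):
    """Deterministic Euler integration of da/dt = v_theta(s, a, t) from noise to action."""
    a = torch.randn(state.shape[0], vf.action_dim, device=state.device)
    dt = 1.0 / num_steps
    for k in range(num_steps):
        t = torch.full((state.shape[0], 1), k * dt, device=state.device)
        a = a + dt * vf(state, a, t)
    return a


def actor_loss(vf, critic, states, buffer_actions, w2_weight: float = 0.1):
    """Maximize Q under the flow policy while penalizing transport cost to
    buffer actions (a stand-in here for the behavior-optimal policy)."""
    actions = sample_action(vf, states)
    q_term = -critic(states, actions).mean()
    w2_term = ((actions - buffer_actions) ** 2).sum(dim=-1).mean()
    return q_term + w2_weight * w2_term
```

In this sketch the Wasserstein-2 constraint is handled as a soft penalty weighted by w2_weight; the paper instead bounds the distance to a behavior-optimal policy derived implicitly from the replay buffer, which this simplified loss only approximates.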
Similar Papers
FinFlowRL: An Imitation-Reinforcement Learning Framework for Adaptive Stochastic Control in Finance
Computational Finance
Teaches computers to make money in changing markets.
One-Step Generative Policies with Q-Learning: A Reformulation of MeanFlow
Machine Learning (CS)
Teaches robots to learn from past actions.
Flow-GRPO: Training Flow Matching Models via Online RL
CV and Pattern Recognition
Makes AI pictures match words perfectly.