DeFlow: Decoupling Manifold Modeling and Value Maximization for Offline Policy Extraction
By: Zhancun Mu
We present DeFlow, a decoupled offline RL framework that leverages flow matching to faithfully capture complex behavior manifolds. Directly optimizing such generative policies for value maximization is computationally prohibitive, as it typically necessitates backpropagation through ODE solvers. We address this by learning a lightweight refinement module within an explicit, data-derived trust region of the flow manifold, rather than sacrificing iterative generation via single-step distillation. In this way, we bypass solver differentiation and eliminate the need to balance competing loss terms, ensuring stable improvement while fully preserving the flow's iterative expressivity. Empirically, DeFlow achieves superior performance on the challenging OGBench benchmark and demonstrates efficient offline-to-online adaptation.
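To make the decoupling concrete, here is a minimal sketch of how such a scheme could be wired up, assuming a standard conditional flow-matching behavior policy and a separate Q-network. All names (VelocityField, Refiner, trust_radius) and the exact losses are illustrative assumptions, not the authors' implementation; the key point is that the flow sample is generated without gradient tracking, so the refiner is trained by value maximization without differentiating through the ODE solver.

```python
# Hypothetical sketch of a decoupled flow-matching + trust-region refinement
# scheme (PyTorch). Names and hyperparameters are assumptions for illustration.
import torch
import torch.nn as nn

class VelocityField(nn.Module):
    """v_theta(s, a_t, t): velocity field of the behavior flow."""
    def __init__(self, state_dim, action_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, action_dim),
        )
    def forward(self, s, a_t, t):
        return self.net(torch.cat([s, a_t, t], dim=-1))

def flow_matching_loss(v, s, a, noise):
    """Conditional flow matching: regress the velocity toward (a - noise)
    along the linear interpolation path between noise and data actions."""
    t = torch.rand(a.shape[0], 1)
    a_t = (1 - t) * noise + t * a
    target = a - noise
    return ((v(s, a_t, t) - target) ** 2).mean()

@torch.no_grad()
def sample_flow(v, s, action_dim, steps=10):
    """Euler integration of the learned ODE; no gradients are tracked,
    so the solver is never differentiated."""
    a = torch.randn(s.shape[0], action_dim)
    for k in range(steps):
        t = torch.full((s.shape[0], 1), k / steps)
        a = a + v(s, a, t) / steps
    return a

class Refiner(nn.Module):
    """Lightweight module producing a bounded residual around the flow sample."""
    def __init__(self, state_dim, action_dim, trust_radius=0.1, hidden=256):
        super().__init__()
        self.trust_radius = trust_radius
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, action_dim), nn.Tanh(),
        )
    def forward(self, s, a_base):
        # The correction is bounded by trust_radius, keeping the refined
        # action close to the behavior manifold sample.
        return a_base + self.trust_radius * self.net(torch.cat([s, a_base], dim=-1))

def refiner_loss(refiner, q_net, v, s, action_dim):
    """Value maximization: only the refiner receives gradients; the flow
    sample is treated as a constant input."""
    a_base = sample_flow(v, s, action_dim)  # detached via @torch.no_grad()
    a_ref = refiner(s, a_base)
    return -q_net(s, a_ref).mean()
```

Because the refinement is additive and clipped to a data-derived radius, there is no auxiliary behavior-cloning penalty to weigh against the value term, which is the sense in which loss balancing is avoided in this sketch.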