Guided Flow Policy: Learning from High-Value Actions in Offline Reinforcement Learning
By: Franki Nguimatsia Tiofack, Théotime Le Hellard, Fabian Schramm, and more
Potential Business Impact:
Teaches robots to learn better from their past actions.
Offline reinforcement learning often relies on behavior regularization that forces policies to remain close to the dataset distribution. However, such approaches fail to distinguish between high-value and low-value actions in their regularization terms. We introduce Guided Flow Policy (GFP), which couples a multi-step flow-matching policy with a distilled one-step actor. The actor directs the flow policy through weighted behavior cloning so that it focuses on cloning high-value actions from the dataset rather than indiscriminately imitating all state-action pairs. In turn, the flow policy constrains the actor to remain aligned with the dataset's best transitions while maximizing the critic. This mutual guidance enables GFP to achieve state-of-the-art performance across 144 state- and pixel-based tasks from the OGBench, Minari, and D4RL benchmarks, with substantial gains on suboptimal datasets and challenging tasks. Webpage: https://simple-robotics.github.io/publications/guided-flow-policy/
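The abstract describes two coupled objectives: a flow-matching policy trained with value-weighted behavior cloning, and a one-step actor that maximizes the critic while staying close to the flow policy's samples. The sketch below illustrates how such a pair of losses could look in PyTorch; the network shapes, the advantage-based exponential weighting with temperature `beta = 3.0`, the Euler sampler, and names like `flow_policy`, `actor`, and `critic` are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of mutual guidance between a flow-matching policy and a one-step
# actor, as described in the abstract. All hyperparameters and names are assumed.
import torch
import torch.nn as nn
import torch.nn.functional as F

STATE_DIM, ACTION_DIM, HIDDEN = 17, 6, 256

def mlp(in_dim, out_dim):
    return nn.Sequential(nn.Linear(in_dim, HIDDEN), nn.ReLU(),
                         nn.Linear(HIDDEN, HIDDEN), nn.ReLU(),
                         nn.Linear(HIDDEN, out_dim))

# Multi-step flow-matching policy: predicts a velocity field v(x_t, t | s).
flow_policy = mlp(STATE_DIM + ACTION_DIM + 1, ACTION_DIM)
# One-step actor distilled from the flow policy: maps state -> action directly.
actor = mlp(STATE_DIM, ACTION_DIM)
# Critic Q(s, a), used both for weighting the cloning loss and for actor maximization.
critic = mlp(STATE_DIM + ACTION_DIM, 1)

def flow_loss(states, actions):
    """Value-weighted flow matching: clone dataset actions, but up-weight
    transitions whose dataset action scores higher than the actor's action."""
    t = torch.rand(states.shape[0], 1)            # flow time in [0, 1]
    noise = torch.randn_like(actions)
    x_t = (1 - t) * noise + t * actions           # linear interpolation path
    target_velocity = actions - noise             # rectified-flow target
    pred_velocity = flow_policy(torch.cat([states, x_t, t], dim=-1))

    with torch.no_grad():
        q_data = critic(torch.cat([states, actions], dim=-1))
        q_actor = critic(torch.cat([states, actor(states)], dim=-1))
        advantage = q_data - q_actor              # how much better the dataset action is
        weights = torch.exp(advantage / 3.0).clamp(max=100.0)  # temperature assumed

    per_sample = ((pred_velocity - target_velocity) ** 2).mean(dim=-1, keepdim=True)
    return (weights * per_sample).mean()

@torch.no_grad()
def sample_flow(states, steps=8):
    """Integrate the learned velocity field with Euler steps to draw a flow action."""
    x = torch.randn(states.shape[0], ACTION_DIM)
    dt = 1.0 / steps
    for i in range(steps):
        t = torch.full((states.shape[0], 1), i * dt)
        x = x + dt * flow_policy(torch.cat([states, x, t], dim=-1))
    return x

def actor_loss(states, distill_coef=1.0):
    """One-step actor: maximize the critic while staying close to the flow
    policy's samples, which anchors it to the dataset's best transitions."""
    a = actor(states)
    q = critic(torch.cat([states, a], dim=-1)).mean()
    distill = F.mse_loss(a, sample_flow(states))
    return -q + distill_coef * distill

# Dummy batch to show that both losses evaluate end to end.
s = torch.randn(32, STATE_DIM)
a = torch.randn(32, ACTION_DIM)
print(flow_loss(s, a).item(), actor_loss(s).item())
```

In this sketch the weighting term is what separates the approach from plain behavior regularization: low-advantage transitions contribute little to the cloning loss, while the distillation term keeps the actor from drifting toward out-of-distribution critic maxima.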
Similar Papers
Reinforcement Fine-Tuning of Flow-Matching Policies for Vision-Language-Action Models
Machine Learning (CS)
Teaches robots to learn new tasks by watching.
Flow-Based Policy for Online Reinforcement Learning
Machine Learning (CS)
Teaches robots to learn new skills faster.
Offline Reinforcement Learning with Generative Trajectory Policies
Machine Learning (CS)
Makes robots learn tasks faster and better.