Reinforcement Learning for Flow-Matching Policies
By: Samuel Pfrommer, Yixiao Huang, Somayeh Sojoudi
Potential Business Impact:
Robots learn to do tasks better than the humans who demonstrated them.
Flow-matching policies have emerged as a powerful paradigm for generalist robotics. These models are trained to imitate an action chunk, conditioned on sensor observations and textual instructions. Often, the training demonstrations are generated by a suboptimal policy, such as a human operator. This work explores training flow-matching policies with reinforcement learning to surpass the performance of the original demonstration policy. We particularly note minimum-time control as a key application and present a simple scheme for variable-horizon flow-matching planning. We then introduce two families of approaches: a simple Reward-Weighted Flow Matching (RWFM) scheme and a Group Relative Policy Optimization (GRPO) approach with a learned reward surrogate. Our policies are trained on an illustrative suite of simulated unicycle dynamics tasks, and we show that both approaches dramatically improve upon the suboptimal demonstrator, with the GRPO approach in particular generally incurring between $50\%$ and $85\%$ less cost than a naive Imitation Learning Flow Matching (ILFM) approach.
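To make the RWFM idea concrete, below is a minimal sketch of a reward-weighted conditional flow-matching training loss. It assumes a linear interpolation path between Gaussian noise and the demonstrated action chunk, with a softmax-over-batch reward weighting; the `velocity_net` interface, the `temperature` parameter, and the weighting scheme are illustrative placeholders rather than the authors' exact formulation.

```python
import torch
import torch.nn.functional as F

def rwfm_loss(velocity_net, obs, action_chunk, reward, temperature=1.0):
    """Reward-weighted conditional flow-matching loss (illustrative sketch).

    velocity_net(obs, noisy_actions, t) -> predicted velocity field
    obs:          (B, obs_dim)    observation conditioning
    action_chunk: (B, H, act_dim) demonstrated action chunk (flow target)
    reward:       (B,)            scalar reward for each demonstration
    """
    B = action_chunk.shape[0]

    # Sample a flow time t ~ U[0, 1] and Gaussian noise for each sample.
    t = torch.rand(B, 1, 1, device=action_chunk.device)
    noise = torch.randn_like(action_chunk)

    # Linear interpolation path between noise (t=0) and data (t=1).
    noisy_actions = (1.0 - t) * noise + t * action_chunk
    target_velocity = action_chunk - noise

    # Per-sample flow-matching regression error.
    pred_velocity = velocity_net(obs, noisy_actions, t.squeeze(-1).squeeze(-1))
    per_sample = F.mse_loss(pred_velocity, target_velocity, reduction="none")
    per_sample = per_sample.mean(dim=(1, 2))  # (B,)

    # Reward weighting: up-weight high-reward demonstrations (sums to 1 over batch).
    weights = torch.softmax(reward / temperature, dim=0)  # (B,)
    return (weights * per_sample).sum()
```

With uniform weights this reduces to an ordinary imitation-learning flow-matching objective (the ILFM baseline), which is why reward weighting is a lightweight way to bias the learned policy toward higher-return demonstrations.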
Similar Papers
ReinFlow: Fine-tuning Flow Matching Policy with Online Reinforcement Learning
Robotics
Teaches robots to move and grab better.
Flow-GRPO: Training Flow Matching Models via Online RL
CV and Pattern Recognition
Makes AI pictures match words perfectly.
Reinforcement Fine-Tuning of Flow-Matching Policies for Vision-Language-Action Models
Machine Learning (CS)
Teaches robots to learn new tasks by watching.