Score: 2

Reinforcement Learning for Flow-Matching Policies

Published: July 20, 2025 | arXiv ID: 2507.15073v1

By: Samuel Pfrommer, Yixiao Huang, Somayeh Sojoudi

BigTech Affiliations: University of California, Berkeley

Potential Business Impact:

Robots trained by imitating human demonstrations could use reinforcement learning to surpass the demonstrators themselves, completing tasks faster and at lower cost.

Business Areas:
Robotics Hardware, Science and Engineering, Software

Flow-matching policies have emerged as a powerful paradigm for generalist robotics. These models are trained to imitate demonstrated action chunks, conditioned on sensor observations and textual instructions. Often, training demonstrations are generated by a suboptimal policy, such as a human operator. This work explores training flow-matching policies via reinforcement learning to surpass the performance of the original demonstration policy. We particularly note minimum-time control as a key application and present a simple scheme for variable-horizon flow-matching planning. We then introduce two families of approaches: a simple Reward-Weighted Flow Matching (RWFM) scheme and a Group Relative Policy Optimization (GRPO) approach with a learned reward surrogate. Our policies are trained on an illustrative suite of simulated unicycle dynamics tasks, and we show that both approaches dramatically improve upon the suboptimal demonstrator performance, with the GRPO approach in particular typically incurring between $50\%$ and $85\%$ less cost than a naive Imitation Learning Flow Matching (ILFM) approach.
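To make the RWFM idea concrete, here is a minimal PyTorch sketch of one training step: the standard conditional flow-matching regression loss, with each trajectory's contribution scaled by a reward-derived weight. The `policy(a_t, t, obs)` signature, the softmax-over-rewards weighting, and the `beta` temperature are illustrative assumptions for this sketch, not the paper's actual implementation.

```python
import torch

def rwfm_loss(policy, obs, actions, rewards, beta=1.0):
    # actions: demonstrated action chunks, shape (batch, horizon, action_dim)
    # rewards: scalar return per trajectory, shape (batch,)
    b = actions.shape[0]
    t = torch.rand(b, 1, 1, device=actions.device)      # flow time ~ U(0, 1)
    noise = torch.randn_like(actions)                   # source sample a_0 ~ N(0, I)
    a_t = (1.0 - t) * noise + t * actions               # linear interpolation path
    v_target = actions - noise                          # conditional velocity a_1 - a_0
    v_pred = policy(a_t, t.view(b), obs)                # hypothetical velocity-field net
    per_sample = ((v_pred - v_target) ** 2).mean(dim=(1, 2))
    weights = torch.softmax(beta * rewards, dim=0) * b  # reward weights, mean ~ 1
    return (weights.detach() * per_sample).mean()
```

The paper's GRPO variant goes further: per the abstract, it replaces ground-truth rewards with a learned reward surrogate and optimizes the policy directly; the sketch above covers only the simpler reward-weighted imitation family.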

Country of Origin
🇺🇸 United States

Repos / Data Links

Page Count
16 pages

Category
Computer Science:
Machine Learning (CS)