Flow Policy Gradients for Robot Control

Published: February 2, 2026 | arXiv ID: 2602.02481v1

By: Brent Yi, Hongsuk Choi, Himanshu Gaurav Singh, and more

Potential Business Impact:

Lets robots learn more capable movement skills, such as walking and object manipulation, directly from reward signals.

Business Areas:
Robotics Hardware, Science and Engineering, Software

Likelihood-based policy gradient methods are the dominant approach for training robot control policies from rewards. These methods rely on differentiable action likelihoods, which constrain policy outputs to simple distributions like Gaussians. In this work, we show how flow matching policy gradients -- a recent framework that bypasses likelihood computation -- can be made effective for training and fine-tuning more expressive policies in challenging robot control settings. We introduce an improved objective that enables success in legged locomotion, humanoid motion tracking, and manipulation tasks, as well as robust sim-to-real transfer on two humanoid robots. We then present ablations and analysis on training dynamics. Results show how policies can exploit the flow representation for exploration when training from scratch, and demonstrate improved fine-tuning robustness over baselines.
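The mechanism the abstract describes -- bypassing likelihood computation by comparing flow matching losses -- can be sketched briefly. Below is a minimal PyTorch-style illustration of the general flow-matching policy gradient recipe; the `velocity_net` name and its `(obs, x_t, t)` signature, the linear interpolation path, the number of Monte Carlo samples, and the clipping constant are all illustrative assumptions, not the paper's improved objective.

```python
import torch

def cfm_loss(velocity_net, obs, actions, n_samples=4):
    # Monte Carlo estimate of the conditional flow matching loss per sample.
    # Uses the standard linear path x_t = (1 - t) * noise + t * action,
    # whose target velocity is (action - noise).
    total = torch.zeros(actions.shape[0], device=actions.device)
    for _ in range(n_samples):
        t = torch.rand(actions.shape[0], 1, device=actions.device)
        noise = torch.randn_like(actions)
        x_t = (1.0 - t) * noise + t * actions
        target_v = actions - noise
        pred_v = velocity_net(obs, x_t, t)  # assumed network signature
        total = total + ((pred_v - target_v) ** 2).mean(dim=-1)
    return total / n_samples

def fpo_surrogate(velocity_net, obs, actions, advantages, old_losses,
                  clip_eps=0.2):
    # PPO-style clipped surrogate with the likelihood ratio replaced by
    # exp(old_loss - new_loss), a proxy built from flow matching losses.
    # old_losses are values recorded under the pre-update policy.
    new_losses = cfm_loss(velocity_net, obs, actions)
    ratio = torch.exp(old_losses.detach() - new_losses)
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps)
    return -torch.min(ratio * advantages, clipped * advantages).mean()
```

In this recipe, a ratio of per-sample flow matching losses stands in for the Gaussian log-likelihood ratio of standard PPO, which is what allows the policy to be an expressive flow model rather than a fixed simple distribution.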

Page Count
20 pages

Category
Computer Science:
Robotics