Flow Policy Gradients for Robot Control
By: Brent Yi, Hongsuk Choi, Himanshu Gaurav Singh, and more
Potential Business Impact:
Teaches robots to move and learn better.
Likelihood-based policy gradient methods are the dominant approach for training robot control policies from rewards. These methods rely on differentiable action likelihoods, which constrain policy outputs to simple distributions like Gaussians. In this work, we show how flow matching policy gradients -- a recent framework that bypasses likelihood computation -- can be made effective for training and fine-tuning more expressive policies in challenging robot control settings. We introduce an improved objective that enables success in legged locomotion, humanoid motion tracking, and manipulation tasks, as well as robust sim-to-real transfer on two humanoid robots. We then present ablations and analysis on training dynamics. Results show how policies can exploit the flow representation for exploration when training from scratch, as well as improved fine-tuning robustness over baselines.
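To make the core idea concrete, below is a minimal, hedged sketch of how a policy-gradient update can be built from a conditional flow matching loss instead of an action log-likelihood. This is an illustration under assumptions, not the authors' implementation or their improved objective: the network, the linear interpolation path, and the PPO-style clipped ratio of exponentiated flow matching losses are all illustrative choices.

```python
# Illustrative sketch (assumptions, not the paper's code): a PPO-style surrogate
# where the likelihood ratio is replaced by a ratio built from per-sample
# conditional flow matching (CFM) losses.
import torch
import torch.nn as nn

class VelocityNet(nn.Module):
    """Predicts a flow velocity v(x_t, t | obs) for action samples."""
    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim + 1, hidden),
            nn.Tanh(),
            nn.Linear(hidden, act_dim),
        )

    def forward(self, obs, x_t, t):
        return self.net(torch.cat([obs, x_t, t], dim=-1))

def cfm_loss(policy, obs, actions, noise, t):
    """Per-sample CFM loss under a linear noise-to-action interpolation path."""
    x_t = (1.0 - t) * noise + t * actions           # interpolant between noise and action
    target_v = actions - noise                       # straight-line target velocity
    pred_v = policy(obs, x_t, t)
    return ((pred_v - target_v) ** 2).mean(dim=-1)   # shape: (batch,)

def flow_surrogate(policy, old_loss, obs, actions, noise, t, adv, clip=0.2):
    """Clipped surrogate: lower CFM loss acts like higher action likelihood."""
    new_loss = cfm_loss(policy, obs, actions, noise, t)
    ratio = torch.exp(old_loss - new_loss)           # stand-in for a likelihood ratio
    clipped = torch.clamp(ratio, 1.0 - clip, 1.0 + clip)
    return -torch.min(ratio * adv, clipped * adv).mean()

# Usage with random placeholder data.
obs_dim, act_dim, batch = 8, 2, 32
policy = VelocityNet(obs_dim, act_dim)
obs, actions = torch.randn(batch, obs_dim), torch.randn(batch, act_dim)
noise, t, adv = torch.randn(batch, act_dim), torch.rand(batch, 1), torch.randn(batch)
with torch.no_grad():
    old_loss = cfm_loss(policy, obs, actions, noise, t)
flow_surrogate(policy, old_loss, obs, actions, noise, t, adv).backward()
```

The point of the sketch is only that no explicit action density appears anywhere: the policy is trained through a regression-style flow matching objective, which is what allows more expressive, non-Gaussian action distributions than standard likelihood-based policy gradients.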
Similar Papers
Temporally Coherent Imitation Learning via Latent Action Flow Matching for Robotic Manipulation
Robotics
Robots learn to move smoothly and finish tasks.
Flow Matching Policy Gradients
Machine Learning (CS)
Teaches robots to move better in tricky situations.
Generative Predictive Control: Flow Matching Policies for Dynamic and Difficult-to-Demonstrate Tasks
Robotics
Robots learn fast moves from simulations, not just experts.