Score: 1

One-Step Flow Policy Mirror Descent

Published: July 31, 2025 | arXiv ID: 2507.23675v1

By: Tianyi Chen, Haitong Ma, Na Li, and more

Potential Business Impact:

Lets robot policies learn and act much faster by reducing policy inference to a single sampling step.

Business Areas:
Simulation Software

Diffusion policies have achieved great success in online reinforcement learning (RL) due to their strong expressive capacity. However, the inference of diffusion policy models relies on a slow iterative sampling process, which limits their responsiveness. To overcome this limitation, we propose Flow Policy Mirror Descent (FPMD), an online RL algorithm that enables 1-step sampling during policy inference. Our approach exploits a theoretical connection between the distribution variance and the discretization error of single-step sampling in straight interpolation flow matching models, and requires no extra distillation or consistency training. We present two algorithm variants based on flow policy and MeanFlow policy parametrizations, respectively. Extensive empirical evaluations on MuJoCo benchmarks demonstrate that our algorithms achieve performance comparable to diffusion policy baselines while requiring hundreds of times fewer function evaluations during inference.
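The abstract describes the mechanism only at a high level, so the following is a minimal sketch, not the authors' implementation, of the core idea: a policy trained with conditional flow matching on the straight interpolation x_t = (1 - t) z + t a can generate an action with a single Euler step from t = 0 to t = 1, i.e. one network evaluation, instead of a multi-step diffusion sampling loop. The FlowPolicy class, network architecture, and dimensions below are assumptions for illustration.

```python
import torch
import torch.nn as nn


class FlowPolicy(nn.Module):
    """Illustrative straight-interpolation flow policy (assumed design, not the paper's code)."""

    def __init__(self, state_dim: int, action_dim: int, hidden: int = 256):
        super().__init__()
        self.action_dim = action_dim
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim + 1, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, action_dim),
        )

    def velocity(self, x_t, t, state):
        # Predicted velocity v_theta(x_t, t | s) along the straight path
        # x_t = (1 - t) * z + t * a, whose target velocity is (a - z).
        return self.net(torch.cat([x_t, t, state], dim=-1))

    def flow_matching_loss(self, state, action):
        # Standard conditional flow matching objective on the linear interpolation.
        z = torch.randn_like(action)
        t = torch.rand(action.shape[0], 1, device=action.device)
        x_t = (1.0 - t) * z + t * action
        target = action - z
        return ((self.velocity(x_t, t, state) - target) ** 2).mean()

    @torch.no_grad()
    def act(self, state):
        # 1-step sampling: a single Euler step from t=0 to t=1,
        # so each action costs one function evaluation.
        z = torch.randn(state.shape[0], self.action_dim, device=state.device)
        t0 = torch.zeros(state.shape[0], 1, device=state.device)
        return z + self.velocity(z, t0, state)


# Usage sketch (dimensions are arbitrary placeholders):
policy = FlowPolicy(state_dim=17, action_dim=6)
states = torch.randn(32, 17)
actions = policy.act(states)   # one forward pass for a batch of actions
print(actions.shape)           # torch.Size([32, 6])
```

In this sketch the single Euler step is exact when the learned velocity field is constant along each path; per the abstract, the paper's analysis ties the residual discretization error of one-step sampling to the variance of the policy distribution, which is what removes the need for distillation or consistency training.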

Country of Origin
🇺🇸 United States

Page Count
12 pages

Category
Computer Science:
Machine Learning (CS)