Dense-Jump Flow Matching with Non-Uniform Time Scheduling for Robotic Policies: Mitigating Multi-Step Inference Degradation
By: Zidong Chen, Zihao Guo, Peng Wang, and more
Potential Business Impact:
Robots learn better by changing how they practice.
Flow matching has emerged as a competitive framework for learning high-quality generative policies in robotics; however, we find that generalisation arises and saturates early along the flow trajectory, in accordance with recent findings in the literature. We further observe that increasing the number of Euler integration steps during inference counter-intuitively and universally degrades policy performance. We attribute this to two factors: (i) additional, uniformly spaced integration steps oversample the late-time region, constraining actions towards the training trajectories and reducing generalisation; and (ii) the learned velocity field becomes non-Lipschitz as integration time approaches 1, causing instability. To address these issues, we propose a novel policy that uses non-uniform time scheduling (e.g., U-shaped) during training, which emphasises both early and late temporal stages to regularise policy training, and a dense-jump integration schedule at inference, which replaces the multi-step integration beyond a jump point with a single-step integration, avoiding the unstable region near 1. Essentially, our policy is an efficient one-step learner that still pushes performance further through multi-step integration, yielding up to 23.7% performance gains over state-of-the-art baselines across diverse robotic tasks.
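The two ideas in the abstract can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the Beta(α, α) distribution with α < 1 is one hypothetical way to realise a U-shaped time density, and `velocity`, `jump_point`, and the helper names are assumptions for the sake of the example.

```python
import numpy as np

def sample_u_shaped_times(n, alpha=0.5, rng=None):
    """Sample flow times t in (0, 1) from a U-shaped density.

    A Beta(alpha, alpha) with alpha < 1 puts extra mass near both
    t = 0 and t = 1, emphasising early and late temporal stages
    (hypothetical choice; the paper only says "e.g., U-shaped").
    """
    rng = np.random.default_rng() if rng is None else rng
    return rng.beta(alpha, alpha, size=n)

def dense_jump_euler(velocity, x0, n_steps=10, jump_point=0.8):
    """Integrate dx/dt = velocity(x, t) with dense, uniformly spaced
    Euler steps on [0, jump_point], then take one single Euler step
    from jump_point to t = 1, skipping the unstable region near 1."""
    ts = np.linspace(0.0, jump_point, n_steps + 1)
    x = x0
    for t0, t1 in zip(ts[:-1], ts[1:]):
        x = x + (t1 - t0) * velocity(x, t0)
    # Single jump step covering [jump_point, 1].
    x = x + (1.0 - jump_point) * velocity(x, jump_point)
    return x
```

For a toy constant velocity field `velocity = lambda x, t: 1.0`, the dense-jump integrator recovers the exact displacement of 1.0 regardless of where the jump point sits, while a learned, non-Lipschitz field near t = 1 is precisely where the single jump step is meant to help.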
Similar Papers
Fast Flow-based Visuomotor Policies via Conditional Optimal Transport Couplings
Robotics
Robots move faster and more smoothly.
Imitation Learning Policy based on Multi-Step Consistent Integration Shortcut Model
Robotics
Teaches robots to copy actions much faster.
Iterative Refinement of Flow Policies in Probability Space for Online Reinforcement Learning
Machine Learning (CS)
Teaches robots to learn new skills faster.