Understanding, Accelerating, and Improving MeanFlow Training
By: Jin-Young Kim, Hyojun Go, Lea Bogensperger, and more
Potential Business Impact:
Makes AI create pictures much faster and better.
MeanFlow promises high-quality generative modeling in a few steps by jointly learning instantaneous and average velocity fields, yet its underlying training dynamics remain unclear. We analyze the interaction between the two velocities and find: (i) a well-established instantaneous velocity is a prerequisite for learning the average velocity; (ii) learning of the instantaneous velocity benefits from the average velocity when the temporal gap is small, but degrades as the gap increases; and (iii) task-affinity analysis indicates that smooth learning of large-gap average velocities, essential for one-step generation, depends on the prior formation of accurate instantaneous and small-gap average velocities. Guided by these observations, we design an effective training scheme that accelerates the formation of the instantaneous velocity, then shifts emphasis from short- to long-interval average velocity. Our enhanced MeanFlow training yields faster convergence and significantly better few-step generation: with the same DiT-XL backbone, our method reaches an impressive FID of 2.87 on 1-NFE ImageNet 256x256, compared to 3.43 for the conventional MeanFlow baseline. Alternatively, our method matches the performance of the MeanFlow baseline with 2.5x shorter training time, or with a smaller DiT-L backbone.
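To make the described curriculum concrete, below is a minimal, hypothetical sketch of how the time pairs (r, t) fed to a MeanFlow-style model could be scheduled so that training first emphasizes the instantaneous velocity (r = t) and then gradually shifts toward long-interval average velocities. The function name, probabilities, and gap schedule are illustrative assumptions, not the authors' exact recipe.

```python
import numpy as np

def sample_time_pair(step, total_steps, rng,
                     p_instant_start=0.75, p_instant_end=0.25):
    """Sample a (r, t) pair for one training example (hypothetical curriculum).

    - With probability p_instant (annealed down over training), return r == t,
      i.e. an instantaneous-velocity target (temporal gap of zero).
    - Otherwise return an average-velocity pair whose maximum gap t - r grows
      with training progress, shifting emphasis from short to long intervals.
    """
    progress = step / total_steps
    p_instant = p_instant_start + (p_instant_end - p_instant_start) * progress

    t = rng.uniform(0.0, 1.0)
    if rng.uniform() < p_instant:
        return t, t  # instantaneous velocity: zero temporal gap

    max_gap = 0.1 + 0.9 * progress            # allowed gap widens over training
    gap = rng.uniform(0.0, min(max_gap, t))    # keep r = t - gap inside [0, 1]
    return t - gap, t

# Example usage: gaps sampled early in training stay small, later ones can be large.
rng = np.random.default_rng(0)
early = [sample_time_pair(100, 10_000, rng) for _ in range(4)]
late = [sample_time_pair(9_900, 10_000, rng) for _ in range(4)]
print(early, late)
```

This sketch only covers how time pairs might be drawn; the paper's actual scheme, loss weighting, and schedule shapes may differ.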
Similar Papers
Improved Mean Flows: On the Challenges of Fastforward Generative Models
CV and Pattern Recognition
Makes AI create pictures faster and better.
Flow Straighter and Faster: Efficient One-Step Generative Modeling via MeanFlow on Rectified Trajectories
CV and Pattern Recognition
Makes AI create images much faster and better.
Modular MeanFlow: Towards Stable and Scalable One-Step Generative Modeling
Machine Learning (CS)
Makes computers create realistic pictures faster.