Mean Flows for One-step Generative Modeling
By: Zhengyang Geng, Mingyang Deng, Xingjian Bai, and more
Potential Business Impact:
Lets computers create realistic pictures in a single step, making generation much faster.
We propose a principled and effective framework for one-step generative modeling. We introduce the notion of average velocity to characterize flow fields, in contrast to instantaneous velocity modeled by Flow Matching methods. A well-defined identity between average and instantaneous velocities is derived and used to guide neural network training. Our method, termed the MeanFlow model, is self-contained and requires no pre-training, distillation, or curriculum learning. MeanFlow demonstrates strong empirical performance: it achieves an FID of 3.43 with a single function evaluation (1-NFE) on ImageNet 256x256 trained from scratch, significantly outperforming previous state-of-the-art one-step diffusion/flow models. Our study substantially narrows the gap between one-step diffusion/flow models and their multi-step predecessors, and we hope it will motivate future research to revisit the foundations of these powerful models.
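The identity relating average and instantaneous velocities can be checked numerically. Below is a minimal sketch of the relation u(z_t, r, t) = v(z_t, t) − (t − r)·du/dt, where u is the average velocity over [r, t] and v is the instantaneous velocity. The toy flow v(z, t) = t and its closed-form trajectory are illustrative assumptions, not taken from the paper; in practice the derivative would be computed with automatic differentiation rather than finite differences.

```python
# Toy flow (an assumption for illustration): v(z, t) = t, so z_t = z_0 + t**2 / 2.

def z(t, z0=0.0):
    # Closed-form trajectory for v(z, t) = t.
    return z0 + t ** 2 / 2

def v(t):
    # Instantaneous velocity of the toy flow (independent of z here).
    return t

def u(r, t):
    # Average velocity over [r, t]: displacement divided by elapsed time.
    return (z(t) - z(r)) / (t - r)

def du_dt(r, t, eps=1e-6):
    # Finite-difference derivative of u with respect to t (r held fixed).
    return (u(r, t + eps) - u(r, t - eps)) / (2 * eps)

r, t = 0.3, 0.9
lhs = u(r, t)                       # average velocity
rhs = v(t) - (t - r) * du_dt(r, t)  # right-hand side of the identity
print(abs(lhs - rhs) < 1e-6)        # the two sides agree numerically
```

For this flow, u(r, t) = (t + r)/2, so both sides evaluate to 0.6 at (r, t) = (0.3, 0.9). In MeanFlow training, a network predicting u would be regressed toward the right-hand side of this identity.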
Similar Papers
Understanding, Accelerating, and Improving MeanFlow Training
CV and Pattern Recognition
Makes AI create pictures much faster and better.
SplitMeanFlow: Interval Splitting Consistency in Few-Step Generative Modeling
Machine Learning (CS)
Makes AI create speech 20 times faster.
Modular MeanFlow: Towards Stable and Scalable One-Step Generative Modeling
Machine Learning (CS)
Makes computers create realistic pictures faster.