Modular MeanFlow: Towards Stable and Scalable One-Step Generative Modeling
By: Haochen You, Baojing Liu, Hongyang He
Potential Business Impact:
Lets computers create realistic pictures in a single step, making generation much faster.
One-step generative modeling seeks to generate high-quality data samples in a single function evaluation, significantly improving efficiency over traditional diffusion or flow-based models. In this work, we introduce Modular MeanFlow (MMF), a flexible and theoretically grounded approach for learning time-averaged velocity fields. We derive a family of loss functions from a differential identity linking instantaneous and average velocities, and incorporate a gradient modulation mechanism that enables stable training without sacrificing expressiveness. We further propose a curriculum-style warmup schedule that smoothly transitions from coarse supervision to fully differentiable training. The MMF formulation unifies and generalizes existing consistency-based and flow-matching methods while avoiding expensive higher-order derivatives. Empirical results on image synthesis and trajectory modeling tasks demonstrate that MMF achieves competitive sample quality, robust convergence, and strong generalization, particularly in low-data and out-of-distribution settings.
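For context on the identity the abstract refers to: in the MeanFlow line of work, the average velocity u(z_t, r, t) = 1/(t − r) · ∫ from r to t of v(z_τ, τ) dτ and the instantaneous velocity v are linked by u(z_t, r, t) = v(z_t, t) − (t − r) · d/dt u(z_t, r, t), where the total derivative along the trajectory is d/dt u = v · ∂_z u + ∂_t u. The sketch below shows one plausible reading of an MMF-style loss built from this identity; the names (`u_theta`, `mmf_loss`, `lam`), the straight-line interpolation path, and the linear warmup shape are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn.functional as F

def mmf_loss(u_theta, x1, lam):
    """MeanFlow-style loss with gradient modulation (illustrative sketch).

    u_theta(z, r, t): network predicting the average velocity over [r, t];
    x1: batch of data samples, shape (B, D);
    lam: modulation coefficient in [0, 1]; lam = 0 detaches the target
         (coarse, consistency-style supervision), lam = 1 keeps it fully
         differentiable.
    """
    b = x1.shape[0]
    t = torch.rand(b, device=x1.device)            # current time in (0, 1)
    r = torch.rand(b, device=x1.device) * t        # earlier time, r <= t
    x0 = torch.randn_like(x1)                      # noise endpoint
    zt = (1 - t[:, None]) * x0 + t[:, None] * x1   # point on a straight path
    v = x1 - x0                                    # instantaneous velocity of that path

    # d/dt u = v . grad_z u + dt u, computed with a single jacobian-vector
    # product -- no higher-order derivatives required.
    u, dudt = torch.func.jvp(
        u_theta,
        (zt, r, t),
        (v, torch.zeros_like(r), torch.ones_like(t)),
    )

    # Gradient modulation: interpolate between a detached and a fully
    # differentiable total derivative before forming the target.
    dudt = lam * dudt + (1 - lam) * dudt.detach()
    target = v - (t - r)[:, None] * dudt           # the differential identity
    return F.mse_loss(u, target)

def warmup_lam(step, warmup_steps=10_000):
    """Curriculum-style warmup: ramp lam from 0 (coarse supervision) to 1
    (fully differentiable training). The linear schedule is an assumption."""
    return min(1.0, step / warmup_steps)
```

With lam held at 0 this reduces to a stop-gradient, consistency-style objective; ramping lam toward 1 over training matches the abstract's description of warming up from coarse supervision to fully differentiable training. Under this sketch's convention (noise at t = 0, data at t = 1), one-step sampling would be x ≈ z + u_theta(z, 0, 1) with z drawn from a standard Gaussian.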
Similar Papers
Improved Mean Flows: On the Challenges of Fastforward Generative Models
CV and Pattern Recognition
Makes AI create pictures faster and better.
Towards High-Order Mean Flow Generative Models: Feasibility, Expressivity, and Provably Efficient Criteria
Machine Learning (CS)
Makes AI create images faster and better.
Understanding, Accelerating, and Improving MeanFlow Training
CV and Pattern Recognition
Makes AI create pictures much faster and better.