Inductive Moment Matching
By: Linqi Zhou, Stefano Ermon, Jiaming Song
Potential Business Impact:
Generates high-quality images in one or a few sampling steps, sharply cutting inference time and compute cost.
Diffusion models and Flow Matching generate high-quality samples but are slow at inference, and distilling them into few-step models often leads to instability and extensive tuning. To resolve these trade-offs, we propose Inductive Moment Matching (IMM), a new class of generative models for one- or few-step sampling with a single-stage training procedure. Unlike distillation, IMM does not require pre-training initialization and optimization of two networks; and unlike Consistency Models, IMM guarantees distribution-level convergence and remains stable under various hyperparameters and standard model architectures. IMM surpasses diffusion models on ImageNet 256×256 with 1.99 FID using only 8 inference steps and achieves state-of-the-art 2-step FID of 1.98 on CIFAR-10 for a model trained from scratch.
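Concretely, "moment matching" here means training against a distribution-level discrepancy rather than a per-sample regression target. The sketch below is a minimal, hedged illustration of that general idea, assuming a kernel-based maximum mean discrepancy (MMD) loss between one-step generator outputs and target samples; the function names, the RBF kernel choice, and the toy generator are hypothetical stand-ins, not the paper's actual inductive, time-dependent objective.

```python
import torch

def rbf_kernel(x, y, bandwidth=1.0):
    # Pairwise RBF kernel values between rows of x and rows of y.
    d2 = torch.cdist(x, y) ** 2
    return torch.exp(-d2 / (2.0 * bandwidth ** 2))

def mmd_loss(samples, targets, bandwidth=1.0):
    # Biased (V-statistic) estimate of squared MMD: matching kernel
    # mean embeddings implicitly matches all moments of the two
    # distributions when the kernel is characteristic (e.g., RBF).
    k_xx = rbf_kernel(samples, samples, bandwidth).mean()
    k_yy = rbf_kernel(targets, targets, bandwidth).mean()
    k_xy = rbf_kernel(samples, targets, bandwidth).mean()
    return k_xx + k_yy - 2.0 * k_xy

# Toy usage: push a one-step generator's outputs toward "data".
generator = torch.nn.Linear(16, 16)   # hypothetical stand-in model
noise = torch.randn(256, 16)          # one-step latent input
data = torch.randn(256, 16) + 2.0     # stand-in target samples
loss = mmd_loss(generator(noise), data)
loss.backward()                       # gradients flow to the generator
```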
Similar Papers
Ideas in Inference-time Scaling can Benefit Generative Pre-training Algorithms
Machine Learning (CS)
Applies inference-time scaling ideas to improve generative pre-training.
Longitudinal Flow Matching for Trajectory Modeling
Machine Learning (CS)
Predicts future trajectories, such as changes in brain scans, from sparse longitudinal data.