Diff2Flow: Training Flow Matching Models via Diffusion Model Alignment
By: Johannes Schusterbauer, Ming Gui, Frank Fundel, and more
Potential Business Impact:
Makes AI art tools learn new styles faster.
Diffusion models have revolutionized generative tasks through high-fidelity outputs, yet flow matching (FM) offers faster inference and empirical performance gains. However, current foundation FM models are computationally prohibitive for finetuning, while diffusion models like Stable Diffusion benefit from efficient architectures and ecosystem support. This work addresses the critical challenge of efficiently transferring knowledge from pre-trained diffusion models to flow matching. We propose Diff2Flow, a novel framework that systematically bridges the diffusion and FM paradigms by rescaling timesteps, aligning interpolants, and deriving FM-compatible velocity fields from diffusion predictions. This alignment enables direct and efficient FM finetuning of diffusion priors with no extra computational overhead. Our experiments demonstrate that Diff2Flow outperforms naïve FM and diffusion finetuning, particularly under parameter-efficient constraints, while achieving superior or competitive performance across diverse downstream tasks compared to state-of-the-art methods. We will release our code at https://github.com/CompVis/diff2flow.
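The bridging idea in the abstract (rescale timesteps, align interpolants, convert a diffusion prediction into an FM velocity) can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a variance-preserving diffusion interpolant x_t = α_t·x0 + σ_t·ε, a rectified-flow interpolant x_t = (1−t)·x0 + t·ε, and the data-to-noise velocity convention v = ε − x0; both helper function names are hypothetical.

```python
import math

def fm_time_to_diffusion(t_fm):
    # Map an FM time t in (0, 1) to a VP-diffusion (alpha, sigma) pair by
    # matching the interpolants' noise-to-signal ratio sigma/alpha = t/(1-t),
    # then normalizing so alpha^2 + sigma^2 = 1 (variance-preserving schedule).
    # Hypothetical helper for illustration only.
    ratio = t_fm / (1.0 - t_fm)
    alpha = 1.0 / math.sqrt(1.0 + ratio * ratio)
    return alpha, ratio * alpha

def diffusion_eps_to_fm_velocity(eps_pred, x_t_diff, alpha, sigma):
    # Recover the x0 estimate implied by an epsilon-prediction, then form the
    # rectified-flow velocity v = eps - x0 (straight path from data to noise).
    x0_pred = (x_t_diff - sigma * eps_pred) / alpha
    return eps_pred - x0_pred
```

Under these assumptions, a perfect ε-prediction yields exactly v = ε − x0, so a pretrained diffusion network can be supervised with an FM loss without architectural changes; the interpolants differ only by a known per-timestep scale, which is where the alignment step comes in.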
Similar Papers
Efficiency vs. Fidelity: A Comparative Analysis of Diffusion Probabilistic Models and Flow Matching on Low-Resource Hardware
Machine Learning (CS)
Makes AI create pictures much faster on phones.
ProReflow: Progressive Reflow with Decomposed Velocity
Graphics
Makes AI create pictures and videos much faster.
Flow Matching based Sequential Recommender Model
Information Retrieval
Suggests better movies by understanding what you like.