FlowBlending: Stage-Aware Multi-Model Sampling for Fast and High-Fidelity Video Generation
By: Jibin Song, Mingi Kwon, Jaeseok Jeong, and more
Potential Business Impact:
Makes video generation faster without losing quality.
In this work, we show that the impact of model capacity varies across timesteps: it is crucial in the early and late stages but largely negligible during the intermediate stage. Accordingly, we propose FlowBlending, a stage-aware multi-model sampling strategy that employs a large model at the capacity-sensitive stages and a small model at the intermediate stage. We further introduce simple criteria for choosing the stage boundaries and provide a velocity-divergence analysis as an effective proxy for identifying capacity-sensitive regions. Across LTX-Video (2B/13B) and WAN 2.1 (1.3B/14B), FlowBlending achieves up to 1.65x faster inference with 57.35% fewer FLOPs, while maintaining the visual fidelity, temporal coherence, and semantic alignment of the large models. FlowBlending is also compatible with existing sampling-acceleration techniques, enabling up to 2x additional speedup. The project page is available at: https://jibin86.github.io/flowblending_project_page.
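To make the stage-aware idea concrete, here is a minimal sketch of what such a sampler could look like. Everything below is an illustrative assumption rather than the paper's actual code: the function names (`flowblending_sample`, `velocity_divergence`), the Euler integrator, the time convention (t=0 noise to t=1 data; some flow samplers integrate in the opposite direction), and the fixed boundary values `t_early`/`t_late`, which the paper instead selects via its boundary criteria.

```python
import torch


@torch.no_grad()
def velocity_divergence(large_model, small_model, x, t):
    # RMS gap between the two models' predicted velocities at timestep t.
    # Timesteps where this gap is small are candidates for the cheap
    # intermediate (small-model) stage; large gaps mark capacity-sensitive
    # regions. This mirrors the velocity-divergence proxy in the abstract.
    return (large_model(x, t) - small_model(x, t)).norm() / x.numel() ** 0.5


@torch.no_grad()
def flowblending_sample(large_model, small_model, x, num_steps=50,
                        t_early=0.2, t_late=0.8):
    """Integrate the flow ODE, switching models by stage.

    large_model / small_model: callables v(x, t) predicting velocity.
    t_early / t_late: stage boundaries (placeholders here; chosen via
    the paper's criteria in practice).
    """
    ts = torch.linspace(0.0, 1.0, num_steps + 1)
    for i in range(num_steps):
        t, t_next = ts[i], ts[i + 1]
        # Capacity-sensitive early/late stages use the large model;
        # the intermediate stage falls back to the small model.
        in_middle = t_early <= t.item() < t_late
        model = small_model if in_middle else large_model
        v = model(x, t)            # predicted velocity field
        x = x + (t_next - t) * v   # Euler step along the flow
    return x
```

In this sketch, only the intermediate steps pay the small model's cost, which is where the reported FLOP and latency savings would come from; because the switch happens at the sampler level, it composes naturally with other sampling-acceleration techniques, as the abstract notes.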
Similar Papers
MixFlow Training: Alleviating Exposure Bias with Slowed Interpolation Mixture
CV and Pattern Recognition
Makes AI create better, clearer pictures.
DeltaFlow: An Efficient Multi-frame Scene Flow Estimation Method
CV and Pattern Recognition
Helps self-driving cars see moving objects better.
From Navigation to Refinement: Revealing the Two-Stage Nature of Flow-based Diffusion Models through Oracle Velocity
Machine Learning (CS)
Teaches computers to create realistic pictures and videos.