IntMeanFlow: Few-step Speech Generation with Integral Velocity Distillation
By: Wei Wang, Rong Cao, Yi Guo, and more
Potential Business Impact:
Makes computer voices sound realistic, faster.
Flow-based generative models have greatly improved text-to-speech (TTS) synthesis quality, but inference speed remains limited by the iterative sampling process and the number of function evaluations (NFE) it requires. The recent MeanFlow model accelerates generation by modeling average velocity instead of instantaneous velocity. However, applying it directly to TTS encounters challenges, including GPU memory overhead from Jacobian-vector products (JVP) and training instability due to the self-bootstrap process. To address these issues, we introduce IntMeanFlow, a framework for few-step speech generation with integral velocity distillation. By approximating the average velocity with the teacher's instantaneous velocity over a temporal interval, IntMeanFlow eliminates the need for JVPs and self-bootstrapping, improving stability and reducing GPU memory usage. We also propose the Optimal Step Sampling Search (O3S) algorithm, which identifies model-specific optimal sampling steps, improving speech synthesis without additional inference overhead. Experiments show that IntMeanFlow achieves 1-NFE inference for the token-to-spectrogram task and 3-NFE inference for the text-to-spectrogram task while maintaining high synthesis quality. Demo samples are available at https://vvwangvv.github.io/intmeanflow.
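To make the core idea concrete, here is a minimal PyTorch sketch of integral velocity distillation as the abstract describes it: the student's average-velocity prediction is regressed onto the teacher's instantaneous velocity averaged over a temporal interval, so no JVP or self-bootstrap target is needed. The function names `teacher(z, tau, cond)` and `student(z, r, t, cond)`, the scalar time convention, and the Euler quadrature with `n_quad` teacher evaluations are illustrative assumptions, not the authors' exact implementation.

```python
import torch

def integral_distillation_loss(student, teacher, z_r, r, t, cond, n_quad=4):
    """Integral velocity distillation (sketch): match the student's average
    velocity u(z_r, r, t) to the teacher's time-averaged instantaneous
    velocity over [r, t], approximated with n_quad Euler steps along the
    teacher's ODE trajectory. The target comes from a frozen teacher, so
    no Jacobian-vector product or self-bootstrap is involved.

    z_r:  state at time r, shape (B, ...)
    r, t: scalar floats with 0 <= r < t <= 1 (assumed time convention)
    """
    dt = (t - r) / n_quad
    z = z_r
    v_sum = torch.zeros_like(z_r)
    with torch.no_grad():  # teacher is frozen; the target carries no gradient
        for i in range(n_quad):
            tau = r + i * dt
            v = teacher(z, tau, cond)   # instantaneous velocity at time tau
            v_sum = v_sum + v
            z = z + dt * v              # Euler step along the teacher trajectory
    v_avg = v_sum / n_quad              # ~ (1/(t-r)) * integral of v over [r, t]
    u_pred = student(z_r, r, t, cond)   # student's predicted average velocity
    return torch.mean((u_pred - v_avg) ** 2)
```

Once distilled, few-step sampling reduces to a handful of average-velocity updates, e.g. a single NFE via `x1 = x0 + (1.0 - 0.0) * student(x0, 0.0, 1.0, cond)`; under this reading, O3S amounts to searching over the intermediate time grid (the sampling steps) for the schedule that scores best on a synthesis-quality metric.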
Similar Papers
MeanFlowSE: one-step generative speech enhancement via conditional mean flow
Sound
Makes noisy voices clear in one step.
MeanFlow-Accelerated Multimodal Video-to-Audio Synthesis via One-Step Generation
Sound
Makes silent videos talk in one step.