WaveFM: A High-Fidelity and Efficient Vocoder Based on Flow Matching
By: Tianze Luo, Xingchen Miao, Wenbo Duan
Potential Business Impact:
Makes computer-generated voices sound more realistic and faster to produce.
Flow matching offers a robust and stable approach to training diffusion models. However, directly applying flow matching to neural vocoders can result in subpar audio quality. In this work, we present WaveFM, a reparameterized flow matching model for mel-spectrogram conditioned speech synthesis, designed to enhance both sample quality and generation speed for diffusion vocoders. Since mel-spectrograms represent the energy distribution of waveforms, WaveFM adopts a mel-conditioned prior distribution instead of a standard Gaussian prior to minimize unnecessary transportation costs during synthesis. Moreover, while most diffusion vocoders rely on a single loss function, we argue that incorporating auxiliary losses, including a refined multi-resolution STFT loss, can further improve audio quality. To speed up inference without significantly degrading sample quality, we introduce a tailored consistency distillation method for WaveFM. Experimental results demonstrate that our model outperforms previous diffusion vocoders in both quality and efficiency, while enabling waveform generation in a single inference step.
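The abstract describes a mel-conditioned prior feeding a conditional flow matching objective. The sketch below illustrates that general idea only, under stated assumptions: the per-frame energy scaling in mel_conditioned_prior, the hop_length value, and the model(xt, mel, t) signature are illustrative guesses, not the paper's actual formulation, and the refined STFT loss and consistency distillation are not shown.

```python
import torch
import torch.nn.functional as F

def mel_conditioned_prior(mel, hop_length=256, eps=1e-5):
    # Hypothetical construction (not specified in the abstract): draw Gaussian
    # noise and scale it per frame by the mel-spectrogram's energy, so the prior
    # already follows the waveform's coarse energy envelope. mel is assumed to
    # be log-scale with shape (B, n_mels, T_mel).
    frame_energy = mel.exp().mean(dim=1, keepdim=True)            # (B, 1, T_mel)
    envelope = frame_energy.repeat_interleave(hop_length, dim=-1)  # (B, 1, T_wav)
    noise = torch.randn_like(envelope)
    return noise * (envelope + eps).sqrt()

def flow_matching_step(model, waveform, mel):
    # Generic conditional flow matching training step: interpolate linearly
    # between a prior sample x0 and the data x1, and regress the constant
    # velocity (x1 - x0) with a mel-conditioned network. Waveform length is
    # assumed to equal T_mel * hop_length.
    x1 = waveform.unsqueeze(1)                      # (B, 1, T_wav)
    x0 = mel_conditioned_prior(mel)
    t = torch.rand(x1.size(0), 1, 1, device=x1.device)
    xt = (1.0 - t) * x0 + t * x1
    target_velocity = x1 - x0
    pred_velocity = model(xt, mel, t)               # assumed model signature
    return F.mse_loss(pred_velocity, target_velocity)
```

At inference, such a model would start from a sample of the mel-conditioned prior and integrate the learned velocity field toward the waveform; the paper's consistency distillation then compresses this into a single step.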
Similar Papers
UniverSR: Unified and Versatile Audio Super-Resolution via Vocoder-Free Flow Matching
Audio and Speech Processing
Makes low-quality audio sound fuller and clearer.
Real-Time Streaming Mel Vocoding with Generative Flow Matching
Audio and Speech Processing
Makes computer voices sound more realistic, in real time.
FourierFlow: Frequency-aware Flow Matching for Generative Turbulence Modeling
Machine Learning (CS)
Makes computer models predict messy fluid flow better.