ECTSpeech: Enhancing Efficient Speech Synthesis via Easy Consistency Tuning
By: Tao Zhu, Yinfeng Yu, Liejun Wang, and more
Potential Business Impact:
Makes computer voices sound realistic while training faster.
Diffusion models have demonstrated remarkable performance in speech synthesis, but typically require multi-step sampling, resulting in low inference efficiency. Recent studies address this issue by distilling diffusion models into consistency models, enabling efficient one-step generation. However, these approaches introduce additional training costs and rely heavily on the performance of pre-trained teacher models. In this paper, we propose ECTSpeech, a simple and effective one-step speech synthesis framework that, for the first time, incorporates the Easy Consistency Tuning (ECT) strategy into speech synthesis. By progressively tightening consistency constraints on a pre-trained diffusion model, ECTSpeech achieves high-quality one-step generation while significantly reducing training complexity. In addition, we design a multi-scale gate module (MSGate) to enhance the denoiser's ability to fuse features at different scales. Experimental results on the LJSpeech dataset demonstrate that ECTSpeech achieves audio quality comparable to state-of-the-art methods under single-step sampling, while substantially reducing the model's training cost and complexity.
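To make the "progressively tightening consistency constraints" idea concrete, here is a minimal, hypothetical sketch of an ECT-style fine-tuning loss. It is not the authors' released code: the pre-trained denoiser is assumed to take a noisy mel-spectrogram and a noise level, and the gap between the two noise levels in each consistency pair shrinks over the course of training, so the constraint starts easy and becomes progressively tighter. The gap schedule and noise-level ranges below are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def ect_loss(model, x0, step, total_steps, sigma_min=0.002, sigma_max=80.0):
    """Sketch of an Easy Consistency Tuning step (hypothetical helper).

    model: pre-trained denoiser, called as model(x, sigma)
    x0:    clean mel-spectrograms, shape [B, n_mels, frames]
    step:  current training step, used to anneal the consistency gap
    """
    b = x0.shape[0]
    log_min = torch.log(torch.tensor(sigma_min))
    log_max = torch.log(torch.tensor(sigma_max))
    # Sample a noise level t log-uniformly in [sigma_min, sigma_max].
    t = torch.exp(torch.rand(b, device=x0.device) * (log_max - log_min) + log_min)
    # Assumed gap schedule: the distance between t and its partner r shrinks
    # as training progresses, which "tightens" the consistency constraint.
    shrink = 0.5 ** (step / max(total_steps // 8, 1))
    r = torch.clamp(t * (1.0 - shrink), min=sigma_min)
    noise = torch.randn_like(x0)
    x_t = x0 + t.view(-1, 1, 1) * noise   # same noise sample at both levels
    x_r = x0 + r.view(-1, 1, 1) * noise
    # Consistency pair: the student prediction at t is pulled toward a
    # stop-gradient prediction at the easier level r -- no teacher model.
    pred_t = model(x_t, t)
    with torch.no_grad():
        pred_r = model(x_r, r)
    return F.mse_loss(pred_t, pred_r)
```

The multi-scale gate (MSGate) described in the abstract could be sketched along the following lines; the layer sizes and the exact fusion rule are assumptions for illustration, not the paper's specification.

```python
import torch.nn as nn

class MSGate(nn.Module):
    """Hypothetical multi-scale gate: each scale's feature map is weighted by
    a learned sigmoid gate before the scales are fused by summation."""
    def __init__(self, channels, num_scales=3):
        super().__init__()
        self.gates = nn.ModuleList(
            nn.Sequential(nn.Conv1d(channels, channels, kernel_size=1), nn.Sigmoid())
            for _ in range(num_scales)
        )

    def forward(self, features):
        # features: list of [B, C, T] tensors, one per scale,
        # assumed already resampled to a common temporal length T.
        return sum(gate(f) * f for gate, f in zip(self.gates, features))
```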
Similar Papers
Robust One-step Speech Enhancement via Consistency Distillation
Audio and Speech Processing
Makes noisy voices clear, super fast.
ETC: training-free diffusion models acceleration with Error-aware Trend Consistency
CV and Pattern Recognition
Makes AI art faster without losing quality.
Input-Aware Sparse Attention for Real-Time Co-Speech Video Generation
CV and Pattern Recognition
Makes talking videos faster and better.