Decoding Order Matters in Autoregressive Speech Synthesis
By: Minghui Zhao, Anton Ragni
Potential Business Impact:
Makes computer voices sound more natural.
Autoregressive speech synthesis often adopts a left-to-right order, yet generation order is a modelling choice. We investigate decoding order through the masked diffusion framework, which progressively unmasks positions and allows arbitrary decoding orders during training and inference. By interpolating between identity and random permutations, we show that randomness in decoding order affects speech quality. We further compare fixed strategies, such as left-to-right (l2r) and right-to-left (r2l), with adaptive ones, such as Top-K, finding that fixed-order decoding, including the dominant left-to-right approach, is suboptimal, while adaptive decoding yields better performance. Finally, since masked diffusion requires discrete inputs, we quantise acoustic representations and find that even 1-bit quantisation can support reasonably high-quality speech.
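To make the decoding-order idea concrete, here is a minimal sketch (not the authors' code) of progressive unmasking with either a fixed order or adaptive Top-K selection. The model interface `predict_logits` is a hypothetical stand-in for any network that scores masked positions, and the way `interpolated_order` blends the identity order with a random permutation is an assumption about how such an interpolation could be realised.

```python
import numpy as np

MASK = -1  # token id reserved for still-masked positions


def interpolated_order(T, alpha, rng):
    """Blend the identity (l2r) order with a random permutation.

    alpha = 0.0 gives pure left-to-right; alpha = 1.0 gives a fully
    random order. Here a fraction alpha of positions is shuffled,
    one plausible realisation of the interpolation.
    """
    order = np.arange(T)
    k = int(round(alpha * T))
    if k > 1:
        idx = np.sort(rng.choice(T, size=k, replace=False))
        order[idx] = rng.permutation(order[idx])
    return order


def decode(predict_logits, T, order=None, top_k=1):
    """Progressively unmask a length-T sequence.

    With `order` given, positions are revealed one at a time in that
    fixed order (l2r, r2l, or an interpolated permutation). Otherwise,
    the top_k masked positions with the highest model confidence are
    revealed each step (adaptive decoding).
    """
    seq = np.full(T, MASK)
    while (seq == MASK).any():
        logits = predict_logits(seq)                       # (T, vocab)
        probs = np.exp(logits - logits.max(-1, keepdims=True))
        probs /= probs.sum(-1, keepdims=True)
        conf = probs.max(-1)                               # per-position confidence
        masked = np.where(seq == MASK)[0]
        if order is not None:
            nxt = [p for p in order if seq[p] == MASK][:1]  # next in fixed order
        else:
            nxt = masked[np.argsort(-conf[masked])[:top_k]]  # most confident
        for p in nxt:
            seq[p] = int(probs[p].argmax())
    return seq


# Toy usage with a dummy scorer over a 4-token vocabulary:
rng = np.random.default_rng(0)
dummy = lambda seq: rng.normal(size=(len(seq), 4))
print(decode(dummy, T=8, order=interpolated_order(8, alpha=0.5, rng=rng)))
print(decode(dummy, T=8, top_k=2))
```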
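The abstract also notes that masked diffusion requires discrete inputs and that even 1-bit quantisation of acoustic representations can suffice. The paper does not specify the scheme, so the sketch below is an assumed sign-based binarisation: threshold each feature dimension at its mean and reconstruct symmetrically around it.

```python
import numpy as np


def quantise_1bit(feats):
    """Map continuous features of shape (frames, dims) to {0, 1} codes.

    Thresholding at the per-dimension mean is an illustrative choice,
    not the paper's stated method.
    """
    mu = feats.mean(axis=0)
    return (feats > mu).astype(np.int8), mu


def dequantise_1bit(codes, mu, scale=1.0):
    """Crude reconstruction: place the two code values symmetrically
    around the per-dimension mean, with an assumed global scale."""
    return mu + scale * (2.0 * codes.astype(np.float32) - 1.0)


# Round-trip a random (frames, dims) feature matrix:
feats = np.random.default_rng(0).normal(size=(100, 80))
codes, mu = quantise_1bit(feats)
recon = dequantise_1bit(codes, mu)
print(codes.shape, recon.shape)  # (100, 80) (100, 80)
```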
Similar Papers
Masked Diffusion Models are Secretly Learned-Order Autoregressive Models
Machine Learning (CS)
Teaches computers to create ordered text better.
Speculative Decoding and Beyond: An In-Depth Survey of Techniques
Computation and Language
Makes AI faster at creating text, images, and speech.
Quantize More, Lose Less: Autoregressive Generation from Residually Quantized Speech Representations
Sound
Makes computer voices sound more real and expressive.