Decoding Order Matters in Autoregressive Speech Synthesis

Published: January 13, 2026 | arXiv ID: 2601.08450v1

By: Minghui Zhao, Anton Ragni

Potential Business Impact:

Makes computer voices sound more natural.

Business Areas:
Speech Recognition Data and Analytics, Software

Autoregressive speech synthesis often adopts a left-to-right order, yet generation order is a modelling choice. We investigate decoding order through a masked diffusion framework, which progressively unmasks positions and allows arbitrary decoding orders during training and inference. By interpolating between identity and random permutations, we show that randomness in the decoding order affects speech quality. We further compare fixed strategies, such as left-to-right (l2r) and right-to-left (r2l), with adaptive ones, such as Top-K, finding that fixed-order decoding, including the dominant left-to-right approach, is suboptimal, while adaptive decoding yields better performance. Finally, since masked diffusion requires discrete inputs, we quantise acoustic representations and find that even 1-bit quantisation can support reasonably high-quality speech.
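To make the decoding-order idea concrete, below is a minimal sketch (not the authors' code) of masked-diffusion-style iterative unmasking, contrasting a fixed left-to-right order with an adaptive Top-K order driven by per-position confidence. The `model` function, the `MASK` sentinel, and the tiny 1-bit codebook (`VOCAB_SIZE = 2`) are all illustrative assumptions, not details from the paper.

```python
# Sketch: choosing which masked positions to reveal at each unmasking round.
# Fixed orders (l2r, r2l) always pick positions by location; adaptive Top-K
# picks the K masked positions where the model is most confident.

import numpy as np

MASK = -1          # hypothetical sentinel id for a still-masked position
VOCAB_SIZE = 2     # e.g. 1-bit quantised acoustic codes -> {0, 1}
SEQ_LEN = 16
STEPS = 4          # number of unmasking rounds
K = SEQ_LEN // STEPS

rng = np.random.default_rng(0)

def model(tokens):
    """Hypothetical network stand-in: logits of shape (SEQ_LEN, VOCAB_SIZE)."""
    return rng.normal(size=(len(tokens), VOCAB_SIZE))

def decode(order="topk"):
    tokens = np.full(SEQ_LEN, MASK)
    for _ in range(STEPS):
        logits = model(tokens)
        probs = np.exp(logits) / np.exp(logits).sum(-1, keepdims=True)
        conf = probs.max(-1)                     # model confidence per position
        masked = np.where(tokens == MASK)[0]     # positions still to be filled
        if order == "l2r":                       # fixed: leftmost K masked slots
            chosen = masked[:K]
        elif order == "r2l":                     # fixed: rightmost K masked slots
            chosen = masked[-K:]
        else:                                    # adaptive Top-K: most confident slots
            chosen = masked[np.argsort(-conf[masked])[:K]]
        tokens[chosen] = probs[chosen].argmax(-1)  # commit predictions at chosen slots
    return tokens

print(decode("l2r"))
print(decode("topk"))
```

In this toy setting the two orders only differ in which indices are committed each round; the paper's finding is that, with a real model, letting confidence drive that choice (adaptive decoding) outperforms any fixed schedule.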

Page Count
5 pages

Category
Computer Science:
Sound