Score: 2

Differentiable Reward Optimization for LLM based TTS system

Published: July 8, 2025 | arXiv ID: 2507.05911v1

By: Changfeng Gao, Zhihao Du, Shiliang Zhang

BigTech Affiliations: Alibaba

Potential Business Impact:

Makes computer-generated voices sound clearer and more human.

Business Areas:
Speech Recognition Data and Analytics, Software

This paper proposes a novel Differentiable Reward Optimization (DiffRO) method aimed at enhancing the performance of neural codec language model based text-to-speech (TTS) systems. In contrast to conventional reinforcement learning from human feedback (RLHF) approaches applied to TTS, DiffRO computes rewards directly from neural codec tokens rather than from synthesized audio. Furthermore, we employ the Gumbel-Softmax technique to render the reward function differentiable, thereby streamlining the RLHF training process. Additionally, we introduce a multi-task reward (MTR) model that provides feedback from different perspectives and find that it augments the system's ability to follow instructions effectively. Experimental results indicate that DiffRO significantly improves the pronunciation accuracy of the TTS system, achieving state-of-the-art (SOTA) word error rate (WER) results on the seed-tts-eval benchmark. Moreover, with the integration of the MTR model, we demonstrate the ability to control emotional and quality attributes in a zero-shot manner.
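To illustrate the core idea, here is a minimal PyTorch sketch of a DiffRO-style update, under assumptions rather than from the paper's code: the model names, shapes, and the toy linear reward head are hypothetical stand-ins. It shows how Gumbel-Softmax sampling over codec-token logits keeps a token-level reward differentiable, so no audio synthesis is needed in the training loop.

```python
# Hypothetical sketch of the DiffRO idea (not the authors' implementation):
# sample codec tokens with Gumbel-Softmax so a reward computed directly on
# tokens, rather than on synthesized audio, stays differentiable w.r.t. the
# TTS language model.
import torch
import torch.nn.functional as F

batch, seq_len, vocab_size = 2, 50, 1024  # toy dimensions (assumed)

# Stand-ins for the real models: the TTS LM emits logits over codec tokens,
# and a reward model maps (soft) token sequences to a scalar reward.
tts_lm_logits = torch.randn(batch, seq_len, vocab_size, requires_grad=True)
reward_head = torch.nn.Linear(vocab_size, 1)  # toy reward model

# Differentiable "sampling": hard=True yields one-hot tokens in the forward
# pass but straight-through soft gradients in the backward pass.
soft_tokens = F.gumbel_softmax(tts_lm_logits, tau=1.0, hard=True)

# Reward computed on codec tokens (here: mean over the sequence).
reward = reward_head(soft_tokens).mean()

# Maximize reward by minimizing its negative; gradients reach the LM logits.
(-reward).backward()
print(tts_lm_logits.grad.shape)  # torch.Size([2, 50, 1024])
```

In a full system, the linear head would be replaced by trained reward models (e.g., pronunciation, emotion, or quality predictors in the paper's multi-task setup), and the logits would come from the neural codec language model rather than a random tensor.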

Country of Origin
🇨🇳 China

Page Count
5 pages

Category
Computer Science:
Sound