Differentiable Reward Optimization for LLM based TTS system
By: Changfeng Gao, Zhihao Du, Shiliang Zhang
Potential Business Impact:
Makes computer voices sound more human and clear.
This paper proposes a novel Differentiable Reward Optimization (DiffRO) method aimed at enhancing the performance of neural codec language model based text-to-speech (TTS) systems. In contrast to conventional reinforcement learning from human feedback (RLHF) approaches applied to TTS, DiffRO computes rewards directly from neural codec tokens rather than relying on synthesized audio. Furthermore, we employ the Gumbel-Softmax technique to render the reward function differentiable, thereby streamlining the RLHF training process. Additionally, we introduce a multi-task reward (MTR) model that provides feedback from different perspectives, and we find that it augments the system's ability to follow instructions effectively. Experimental results indicate that DiffRO significantly improves the pronunciation accuracy of the TTS system, achieving state-of-the-art (SOTA) word error rate (WER) results on the seed-tts-eval benchmark. Moreover, with the integration of the MTR model, we demonstrate the ability to control emotional and quality attributes in a zero-shot manner.
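To make the core idea concrete, the sketch below shows one way the described training step could look in PyTorch: codec-token logits from the TTS language model are sampled with straight-through Gumbel-Softmax, so a reward computed on the (soft) token sequence stays differentiable and can be backpropagated into the model without synthesizing audio. The module names `tts_lm`, `reward_model`, and `codec_embedding` are hypothetical stand-ins, not the paper's actual components; this is a minimal illustration of the technique, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def diffro_step(tts_lm, reward_model, codec_embedding, text_tokens, tau=1.0):
    """One optimization step of the DiffRO idea (illustrative sketch).

    tts_lm          : module mapping text tokens -> codec-token logits (B, T, V)
    reward_model    : module scoring a sequence of codec-token embeddings -> (B,)
                      (an MTR model could return several such scores and sum them)
    codec_embedding : nn.Embedding over the codec vocabulary (V, D)
    """
    logits = tts_lm(text_tokens)                              # (B, T, V)

    # Straight-through Gumbel-Softmax: hard one-hot tokens on the forward pass,
    # soft gradients on the backward pass, keeping the reward differentiable.
    one_hot = F.gumbel_softmax(logits, tau=tau, hard=True)    # (B, T, V)

    # Soft embedding lookup: reward is computed on codec tokens directly,
    # not on synthesized audio.
    token_embs = one_hot @ codec_embedding.weight             # (B, T, D)

    reward = reward_model(token_embs)                         # (B,)

    # Maximizing the reward = minimizing its negative.
    loss = -reward.mean()
    loss.backward()
    return loss.item()
```

Because the gradient flows from the reward model through the soft one-hot weights back into the TTS language model, the reward acts as a direct differentiable training signal, which is what removes the sampling-and-synthesis loop required by conventional RLHF pipelines.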
Similar Papers
RRPO: Robust Reward Policy Optimization for LLM-based Emotional TTS
Sound
Makes computer voices sound more real and emotional.
Explore the Reinforcement Learning for the LLM based ASR and TTS system
Sound
Makes talking computers understand and speak better.
MRO: Enhancing Reasoning in Diffusion Language Models via Multi-Reward Optimization
Computation and Language
Makes AI think better and faster.