RLAIF-SPA: Optimizing LLM-based Emotional Speech Synthesis via RLAIF
By: Qing Yang, Zhenghao Liu, Junxin Wang, and more
Potential Business Impact:
Makes computer voices sound happy or sad.
Text-to-speech synthesis has achieved near-human quality for neutral speech, but emotional expressiveness remains a challenge. Existing methods often rely on costly emotion annotations or optimize indirect objectives that fail to capture the emotional expressiveness and perceptual naturalness of speech, yielding speech that is accurate but emotionally flat. To address these challenges, we propose the RLAIF-SPA framework, which incorporates a Reinforcement Learning from AI Feedback (RLAIF) mechanism that employs Automatic Speech Recognition (ASR) and Large Language Model (LLM) techniques to judge semantic accuracy and prosodic-emotional label alignment, respectively, as direct rewards for optimizing intelligibility and emotional expressiveness. Specifically, the framework leverages Prosodic Label Alignment to enhance expressive quality by jointly considering semantic accuracy and prosodic-emotional alignment along four fine-grained dimensions: Structure, Emotion, Speed, and Tone. In addition, it incorporates Semantic Accuracy Feedback to ensure clear and accurate speech generation. Experiments on the LibriSpeech dataset show that RLAIF-SPA outperforms Chat-TTS, with a 26.1% reduction in WER, a 9.1% increase in SIM-O, and over a 10% improvement in human evaluation.
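Since only the abstract is given here, the sketch below is a minimal illustration of how such an AI-feedback reward could be assembled: an ASR-based semantic-accuracy term (approximated as 1 - WER) combined with an LLM-judge score averaged over the four named dimensions. The function names, the [0, 1] scoring scale, and the equal weighting are illustrative assumptions, not the authors' implementation.

from dataclasses import dataclass

# The four fine-grained prosodic dimensions named in the abstract.
PROSODIC_DIMENSIONS = ("structure", "emotion", "speed", "tone")

@dataclass
class RewardConfig:
    # Equal weighting is an assumption; the paper may combine terms differently.
    semantic_weight: float = 0.5
    prosodic_weight: float = 0.5

def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word-level Levenshtein distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,      # deletion
                          d[i][j - 1] + 1,      # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[-1][-1] / max(1, len(ref))

def semantic_reward(reference_text: str, asr_transcript: str) -> float:
    """Semantic Accuracy Feedback: reward intelligible speech.

    Approximated here as 1 - WER on the ASR transcript, clipped to [0, 1];
    the paper's exact mapping from ASR output to reward may differ.
    """
    return max(0.0, 1.0 - word_error_rate(reference_text, asr_transcript))

def prosodic_reward(dimension_scores: dict[str, float]) -> float:
    """Prosodic Label Alignment: average the LLM judge's scores
    (assumed to lie in [0, 1]) over the four dimensions."""
    return sum(dimension_scores[d] for d in PROSODIC_DIMENSIONS) / len(PROSODIC_DIMENSIONS)

def rlaif_reward(reference_text: str, asr_transcript: str,
                 dimension_scores: dict[str, float],
                 cfg: RewardConfig = RewardConfig()) -> float:
    """Scalar reward that an RL loop could use to update the TTS policy."""
    return (cfg.semantic_weight * semantic_reward(reference_text, asr_transcript)
            + cfg.prosodic_weight * prosodic_reward(dimension_scores))

# Example: a clean transcript with strong judged prosody yields a high reward.
scores = {"structure": 0.9, "emotion": 0.8, "speed": 0.9, "tone": 0.85}
print(rlaif_reward("the quick brown fox", "the quick brown fox", scores))

In an actual RLAIF pipeline the ASR transcript would come from a recognizer run on the synthesized audio and the dimension scores from an LLM judging the speech against its emotion label; both are stubbed as plain inputs here.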
Similar Papers
Optimizing Conversational Quality in Spoken Dialogue Systems with Reinforcement Learning from AI Feedback
Computation and Language
Makes chatbots talk more naturally and sound better.
Aligning Large Language Models via Fully Self-Synthetic Data
Computation and Language
Lets AI learn to be helpful by itself.
WhiSPA: Semantically and Psychologically Aligned Whisper with Self-Supervised Contrastive and Student-Teacher Learning
Audio and Speech Processing
Helps computers understand emotions in spoken words.