EME-TTS: Unlocking the Emphasis and Emotion Link in Speech Synthesis
By: Haoxun Li, Leyuan Qu, Jiaxi Hu, and more
Potential Business Impact:
Makes talking robots sound more expressive and clear.
In recent years, emotional Text-to-Speech (TTS) synthesis and emphasis-controllable speech synthesis have advanced significantly. However, their interaction remains underexplored. We propose Emphasis Meets Emotion TTS (EME-TTS), a novel framework designed to address two key research questions: (1) how to effectively utilize emphasis to enhance the expressiveness of emotional speech, and (2) how to maintain the perceptual clarity and stability of target emphasis across different emotions. EME-TTS employs weakly supervised learning with emphasis pseudo-labels and variance-based emphasis features. Additionally, the proposed Emphasis Perception Enhancement (EPE) block strengthens the interaction between emotional signals and emphasis positions. Experimental results show that EME-TTS, when combined with large language models for emphasis position prediction, enables more natural emotional speech synthesis while preserving stable and distinguishable target emphasis across emotions. Synthesized samples are available online.
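The abstract mentions weakly supervised learning with emphasis pseudo-labels derived from variance-based features. As a rough illustration only (not the paper's actual pipeline), one common way to obtain such pseudo-labels is to flag words whose prosodic features (e.g., mean energy and F0) deviate strongly from the utterance average. The function names, thresholds, and feature choices below are assumptions for the sketch:

```python
# Hedged sketch: variance-based emphasis pseudo-labeling.
# Assumes per-word mean energy and mean F0 values have already been
# extracted by some upstream feature extractor (not shown here).
from statistics import mean, pstdev

def zscores(values):
    """Standardize per-word feature values within one utterance."""
    m, s = mean(values), pstdev(values)
    s = s if s > 0 else 1.0  # guard against zero variance
    return [(v - m) / s for v in values]

def emphasis_pseudo_labels(word_energy, word_f0, threshold=1.0):
    """Label a word as emphasized (1) when its combined energy/F0
    z-score exceeds the threshold; otherwise 0. The 0.5/0.5 weighting
    and the threshold are illustrative choices, not from the paper."""
    scores = [(ze + zf) / 2
              for ze, zf in zip(zscores(word_energy), zscores(word_f0))]
    return [1 if s > threshold else 0 for s in scores]

# Example: the third word is louder and higher-pitched than the rest,
# so it is the only one flagged as emphasized.
labels = emphasis_pseudo_labels([0.4, 0.5, 0.9, 0.45],
                                [180, 175, 230, 182])
# → [0, 0, 1, 0]
```

In a weakly supervised setup, labels of this kind would stand in for human emphasis annotations when training the synthesis model.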