Emotional Text-To-Speech Based on Mutual-Information-Guided Emotion-Timbre Disentanglement
By: Jianing Yang, Sheng Li, Takahiro Shinozaki and more
Potential Business Impact:
Makes computer voices sound more real and emotional.
Current emotional Text-To-Speech (TTS) and style transfer methods rely on reference encoders to control global style or emotion vectors, but do not capture the nuanced acoustic details of the reference speech. To address this, we propose a novel emotional TTS method that enables fine-grained phoneme-level emotion embedding prediction while disentangling intrinsic attributes of the reference speech. The proposed method employs a style disentanglement scheme that guides two feature extractors to reduce the mutual information between timbre and emotion features, effectively separating distinct style components from the reference speech. Experimental results demonstrate that our method outperforms baseline TTS systems in generating natural and emotionally rich speech. This work highlights the potential of disentangled and fine-grained representations in advancing the quality and flexibility of emotional TTS systems.
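The abstract does not specify how the mutual-information penalty is estimated, but the core idea, two reference encoders whose timbre and emotion embeddings are trained to share as little information as possible, can be sketched. Below is a minimal PyTorch sketch assuming a CLUB-style MI upper bound (Cheng et al., 2020), one common estimator for this kind of penalty; the GRU encoders, module names, and dimensions are illustrative assumptions, not the authors' architecture.

```python
# Hypothetical sketch of mutual-information-guided emotion-timbre disentanglement.
# Assumptions: GRU reference encoders and a CLUB-style MI upper bound; the paper's
# actual extractors and estimator are not described in the abstract.
import torch
import torch.nn as nn

class RefEncoder(nn.Module):
    """Maps a reference mel-spectrogram (B, T, n_mels) to a style embedding (B, dim)."""
    def __init__(self, n_mels=80, dim=128):
        super().__init__()
        self.rnn = nn.GRU(n_mels, dim, batch_first=True)

    def forward(self, mel):
        _, h = self.rnn(mel)        # h: (1, B, dim), final hidden state
        return h.squeeze(0)         # (B, dim)

class CLUB(nn.Module):
    """CLUB upper bound on I(x; y): fit a Gaussian q(y|x), then penalize the bound."""
    def __init__(self, x_dim, y_dim, hidden=256):
        super().__init__()
        self.mu = nn.Sequential(nn.Linear(x_dim, hidden), nn.ReLU(), nn.Linear(hidden, y_dim))
        self.logvar = nn.Sequential(nn.Linear(x_dim, hidden), nn.ReLU(), nn.Linear(hidden, y_dim))

    def loglik(self, x, y):
        # Maximized w.r.t. the CLUB parameters so q(y|x) tracks the true conditional.
        mu, logvar = self.mu(x), self.logvar(x)
        return (-((y - mu) ** 2) / logvar.exp() - logvar).sum(dim=1).mean()

    def mi_upper_bound(self, x, y):
        # E[log q(y|x)] over matched pairs minus over shuffled pairs (batch marginal).
        mu, logvar = self.mu(x), self.logvar(x)
        pos = -((y - mu) ** 2) / logvar.exp()                                   # (B, D)
        neg = -((y.unsqueeze(0) - mu.unsqueeze(1)) ** 2) / logvar.exp().unsqueeze(1)  # (B, B, D)
        return (pos.sum(1) - neg.sum(2).mean(1)).mean()

# Usage: add the MI bound, scaled by a weight, to the TTS training objective.
timbre_enc, emotion_enc = RefEncoder(), RefEncoder()
club = CLUB(128, 128)
mel = torch.randn(4, 200, 80)                  # dummy reference batch
z_timbre, z_emotion = timbre_enc(mel), emotion_enc(mel)
mi_loss = club.mi_upper_bound(z_timbre, z_emotion)
```

In practice such an estimator is trained alternately: `loglik` is maximized with respect to the CLUB network's own parameters, while `mi_upper_bound` is minimized with respect to the encoders, pushing timbre and emotion embeddings toward independence.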
Similar Papers
EmoSteer-TTS: Fine-Grained and Training-Free Emotion-Controllable Text-to-Speech via Activation Steering
Sound
Makes computer voices sound happy or sad.
Voiced-Aware Style Extraction and Style Direction Adjustment for Expressive Text-to-Speech
Sound
Makes computer voices sound more like real people.
DMP-TTS: Disentangled multi-modal Prompting for Controllable Text-to-Speech with Chained Guidance
Sound
Makes voices sound like anyone, any way.