Robust Training of Singing Voice Synthesis Using Prior and Posterior Uncertainty
By: Yiwen Zhao, Jiatong Shi, Yuxun Tang, et al.
Singing voice synthesis (SVS) has seen remarkable advances in recent years. However, compared to speech and general audio data, publicly available singing datasets remain limited. In practice, this data scarcity often leads to performance degradation in long-tail scenarios, such as imbalanced pitch distributions or rare singing styles. To mitigate these challenges, we propose uncertainty-based optimization to improve the training process of end-to-end SVS models. First, we introduce differentiable data augmentation into the adversarial training, which operates in a sample-wise manner to increase the prior uncertainty. Second, we incorporate a frame-level uncertainty prediction module that estimates the posterior uncertainty, enabling the model to allocate more learning capacity to low-confidence segments. Empirical results on the Opencpop (Chinese) and Ofuton-P (Japanese) datasets demonstrate that our approach improves performance from multiple perspectives.
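The abstract names the two mechanisms but does not spell out their formulations. The sketch below is a rough illustration only, not the authors' implementation: it shows (a) a DiffAugment-style differentiable, sample-wise augmentation applied before the adversarial discriminator, and (b) one common way to use frame-level predicted uncertainty to re-weight a reconstruction loss toward low-confidence frames. The function and class names, the random-gain augmentation, the L1 frame error, and all tensor shapes are assumptions made for illustration.

```python
import torch
import torch.nn as nn


def differentiable_augment(wave: torch.Tensor, gain_range: float = 0.1) -> torch.Tensor:
    """Sample-wise, differentiable augmentation (illustrative random gain).

    Applied to both real and generated waveforms before the discriminator so
    that gradients still flow back to the generator.
    wave: (batch, samples)
    """
    gain = 1.0 + (torch.rand(wave.size(0), 1, device=wave.device) * 2.0 - 1.0) * gain_range
    return wave * gain


class FrameUncertaintyWeighting(nn.Module):
    """Illustrative frame-level posterior-uncertainty weighting.

    A small head is trained to predict the per-frame reconstruction error
    (used here as a proxy for posterior uncertainty); the detached prediction
    then re-weights the frame loss so low-confidence frames receive more
    learning capacity.
    """

    def __init__(self, hidden_dim: int):
        super().__init__()
        # Hypothetical head mapping decoder states to one non-negative value per frame.
        self.uncertainty_head = nn.Sequential(nn.Linear(hidden_dim, 1), nn.Softplus())

    def forward(self, hidden: torch.Tensor, pred_mel: torch.Tensor,
                target_mel: torch.Tensor) -> torch.Tensor:
        # hidden:     (batch, frames, hidden_dim) decoder states
        # pred_mel:   (batch, frames, n_mels) predicted mel-spectrogram
        # target_mel: (batch, frames, n_mels) reference mel-spectrogram
        frame_err = (pred_mel - target_mel).abs().mean(dim=-1, keepdim=True)  # (B, T, 1)
        sigma = self.uncertainty_head(hidden)                                 # (B, T, 1)
        # Train the head to track the actual frame error; the error is detached
        # so the head cannot push the synthesizer toward larger errors.
        uncertainty_loss = (sigma - frame_err.detach()).abs().mean()
        # Re-weight the reconstruction term: frames with higher predicted
        # uncertainty (normalized to mean 1, detached) contribute more gradient.
        weight = (sigma / (sigma.mean() + 1e-8)).detach()
        weighted_recon = (weight * frame_err).mean()
        return weighted_recon + uncertainty_loss
```

In this sketch the augmentation stays differentiable so adversarial gradients pass through it, and the uncertainty weight is detached so only the magnitude of attention to each frame changes, not the target itself; the actual SVS training objective in the paper may differ.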