A comparative study of generative models for child voice conversion
By: Protima Nomo Sudro, Anton Ragni, Thomas Hain
Generative models are a popular choice for adult-to-adult voice conversion (VC) because they can model unlabelled data efficiently. To date, their usefulness in producing children's speech, and in particular in adult-to-child VC, has not been investigated. For adult-to-child VC, four generative models are compared: a diffusion model, a flow-based model, a variational autoencoder, and a generative adversarial network. Results show that although the converted speech produced by these models sounds plausible, it exhibits insufficient similarity to the target speaker characteristics. We introduce an efficient frequency warping technique that can be applied to the model outputs and significantly reduces the mismatch between adult and child speech. The outputs of all models are evaluated using both objective and subjective measures. In particular, we compare specific speaker pairings using a unique corpus collected for dubbing of children's speech.
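The abstract does not spell out how the frequency warping is applied; as a rough illustration only, the sketch below shows a simple linear warp of the frequency axis of a converted mel-spectrogram before vocoding. The function name, warp factor, and spectrogram dimensions are assumptions for illustration, not the authors' actual method.

```python
import numpy as np

def warp_frequency(spec: np.ndarray, alpha: float = 1.2) -> np.ndarray:
    """Linearly warp the frequency axis of a (freq_bins, frames) spectrogram.

    alpha > 1 stretches spectral content toward higher frequencies,
    roughly mimicking the shorter vocal tract of a child speaker.
    """
    n_bins, n_frames = spec.shape
    target_bins = np.arange(n_bins)
    # Each output bin k is sampled from source bin k / alpha.
    source_bins = target_bins / alpha
    warped = np.empty_like(spec)
    for t in range(n_frames):
        warped[:, t] = np.interp(source_bins, target_bins, spec[:, t])
    return warped

# Example: warp an 80-bin mel-spectrogram produced by a VC model.
mel = np.random.rand(80, 200)  # placeholder for a converted-speech mel-spectrogram
mel_childlike = warp_frequency(mel, alpha=1.15)
```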