Towards Better Disentanglement in Non-Autoregressive Zero-Shot Expressive Voice Conversion
By: Seymanur Akti, Tuan Nam Nguyen, Alexander Waibel
Potential Business Impact:
Changes a voice to sound like someone else, including their emotional expression.
Expressive voice conversion aims to transfer both speaker identity and expressive attributes from target speech to a given source speech. In this work, we improve on a self-supervised, non-autoregressive framework with a conditional variational autoencoder, focusing on reducing source timbre leakage and improving linguistic-acoustic disentanglement for better style transfer. To minimize style leakage, we use multilingual discrete speech units for the content representation and reinforce the embeddings with an augmentation-based similarity loss and mix-style layer normalization. To enhance expressivity transfer, we incorporate local F0 information via cross-attention and extract style embeddings enriched with global pitch and energy features. Experiments show our model outperforms the baselines in emotion and speaker similarity, demonstrating superior style adaptation and reduced source style leakage.
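The abstract mentions mix-style layer normalization as one of the tools for keeping source timbre out of the content branch. Below is a minimal PyTorch sketch of that general idea (normalize content features per utterance, then re-apply statistics mixed across the batch so the content path cannot rely on a fixed speaker "style"). The module name, tensor layout, and hyperparameters (p, alpha) are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn


class MixStyleLayerNorm(nn.Module):
    """Sketch of a MixStyle-flavoured normalization layer (assumed design).

    Content features are normalized per utterance, then rescaled with
    feature statistics randomly interpolated between utterances in the
    batch, discouraging the content representation from carrying
    speaker/style information.
    """

    def __init__(self, p: float = 0.5, alpha: float = 0.1, eps: float = 1e-6):
        super().__init__()
        self.p = p                                   # probability of mixing
        self.beta = torch.distributions.Beta(alpha, alpha)
        self.eps = eps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, channels) content features
        mu = x.mean(dim=1, keepdim=True)
        sigma = x.std(dim=1, keepdim=True) + self.eps
        x_norm = (x - mu) / sigma

        # At inference, or with probability 1 - p, return plain normalization.
        if not self.training or torch.rand(1).item() > self.p:
            return x_norm

        # Interpolate per-utterance statistics with a shuffled batch.
        lam = self.beta.sample((x.size(0), 1, 1)).to(x.device)
        perm = torch.randperm(x.size(0))
        mu_mix = lam * mu + (1 - lam) * mu[perm]
        sigma_mix = lam * sigma + (1 - lam) * sigma[perm]
        return x_norm * sigma_mix + mu_mix


# Toy usage: a batch of 4 utterances, 100 frames, 256-dim content features.
layer = MixStyleLayerNorm().train()
out = layer(torch.randn(4, 100, 256))
```

The mixing is applied only during training; at inference the layer reduces to plain per-utterance normalization, which is what makes it a cheap regularizer rather than an architectural change.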
Similar Papers
Voice Conversion with Diverse Intonation using Conditional Variational Auto-Encoder
Sound
Changes voices to sound like anyone, with feeling.
Provable Speech Attributes Conversion via Latent Independence
Sound
Changes voice to sound like someone else.
Online Audio-Visual Autoregressive Speaker Extraction
Audio and Speech Processing
Helps computers hear one voice in noisy rooms.