Provable Speech Attributes Conversion via Latent Independence
By: Jonathan Svirsky, Ofir Lindenbaum, Uri Shaham
Potential Business Impact:
Changes voices to sound like someone else.
While signal conversion and disentangled representation learning have shown promise for manipulating data attributes across domains such as audio, image, and multimodal generation, existing approaches, especially for speech style conversion, are largely empirical and lack rigorous theoretical foundations to guarantee reliable and interpretable control. In this work, we propose a general framework for speech attribute conversion, accompanied by theoretical analysis and guarantees under reasonable assumptions. Our framework builds on a non-probabilistic autoencoder architecture with an independence constraint between the predicted latent variable and the target controllable variable. This design ensures a consistent signal transformation, conditioned on an observed style variable, while preserving the original content and modifying the desired attribute. We further demonstrate the versatility of our method by evaluating it on speech styles, including speaker identity and emotion. Quantitative evaluations confirm the effectiveness and generality of the proposed approach.
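The core idea in the abstract — an autoencoder whose decoder is conditioned on an observed style variable while the latent code is constrained to be independent of that variable — can be illustrated with a toy sketch. The code below is a hypothetical, minimal linear version (not the authors' implementation): independence is approximated by penalizing the cross-covariance between the latent `z` and the style label `s`, and all data, dimensions, and weights are illustrative assumptions.

```python
import numpy as np

# Hypothetical sketch of the abstract's idea: a linear autoencoder whose
# decoder receives an observed style variable s alongside the latent z,
# while z is penalized for covarying with s. Because the decoder already
# sees s, z can drop style information and keep only content.

rng = np.random.default_rng(0)
n, d_x, d_z = 256, 8, 2

s = rng.integers(0, 2, size=n).astype(float)   # observed style (e.g. speaker id)
content = rng.normal(size=(n, d_x))            # style-independent content
x = content + 0.5 * s[:, None]                 # observed signal mixes both

W_enc = 0.1 * rng.normal(size=(d_x, d_z))      # encoder weights
W_dec = 0.1 * rng.normal(size=(d_z + 1, d_x))  # decoder input is [z, s]

lr, lam = 0.05, 5.0                            # step size, penalty weight
x_c = x - x.mean(axis=0)                       # centered inputs
s_c = s - s.mean()                             # centered style labels
init_mse = float(np.mean(
    (np.concatenate([x @ W_enc, s[:, None]], axis=1) @ W_dec - x) ** 2))

for _ in range(500):
    z = x @ W_enc                              # encode
    zs = np.concatenate([z, s[:, None]], axis=1)
    err = zs @ W_dec - x                       # reconstruction residual
    cov = (x_c @ W_enc).T @ s_c / n            # cross-covariance of z with s

    g_dec = zs.T @ err / n                     # reconstruction grad, decoder
    g_enc = (x.T @ (err @ W_dec[:d_z].T) / n   # reconstruction grad, encoder
             + 2 * lam * np.outer(x_c.T @ s_c / n, cov))  # independence penalty
    W_dec -= lr * g_dec
    W_enc -= lr * g_enc

final_mse = float(np.mean(err ** 2))
z = x @ W_enc
# style conversion: re-decode the same latent content with the style flipped
x_conv = np.concatenate([z, 1.0 - s[:, None]], axis=1) @ W_dec
```

After training, `x_conv` keeps the content carried by `z` while swapping the style term, which is the conversion behavior the paper aims to guarantee; the paper itself provides the theoretical analysis, which this squared-covariance penalty only caricatures.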
Similar Papers
Towards Better Disentanglement in Non-Autoregressive Zero-Shot Expressive Voice Conversion
Sound
Changes a voice to sound like someone else.
Latent Multi-view Learning for Robust Environmental Sound Representations
Sound
Helps computers understand sounds better by learning from noise.