Inference Attacks for X-Vector Speaker Anonymization
By: Luke Bauer, Wenxuan Bao, Malvika Jadhav, and more
Potential Business Impact:
Keeps your voice private from eavesdroppers who try to re-identify speakers.
We revisit the privacy-utility tradeoff of x-vector speaker anonymization. Existing approaches quantify privacy by training complex speaker verification or identification models that are later used as attacks. Instead, we propose a novel inference attack for de-anonymization. Our attack is simple and ML-free, yet we show experimentally that it outperforms existing approaches.
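The abstract does not spell out the attack's mechanics, but an ML-free de-anonymization attack of this kind typically reduces to a similarity search over speaker embeddings rather than a trained classifier. The sketch below is a hypothetical illustration under that assumption, not the paper's actual method: it links an anonymized x-vector to the closest enrolled speaker by cosine similarity. The function name `deanonymize`, the enrollment dictionary, and the 512-dimensional toy vectors are all assumptions made for the example.

```python
import numpy as np


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two x-vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def deanonymize(anon_xvector: np.ndarray,
                enrollment: dict[str, np.ndarray]) -> str:
    """Link an anonymized x-vector to the enrolled speaker whose reference
    x-vector is most similar. No trained model is involved: the "attack" is
    a plain nearest-neighbor search, i.e. ML-free scoring."""
    return max(enrollment,
               key=lambda spk: cosine_similarity(anon_xvector, enrollment[spk]))


# Toy usage with random 512-dimensional x-vectors for three enrolled speakers.
rng = np.random.default_rng(0)
enrollment = {f"speaker_{i}": rng.normal(size=512) for i in range(3)}
# A noisy copy stands in for an anonymized probe utterance.
probe = enrollment["speaker_1"] + 0.1 * rng.normal(size=512)
print(deanonymize(probe, enrollment))  # -> speaker_1
```

In practice the interesting question, which the paper evaluates, is how much residual speaker information survives anonymization for such simple scoring to exploit.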
Similar Papers
Private kNN-VC: Interpretable Anonymization of Converted Speech
Audio and Speech Processing
Makes voices harder to recognize while keeping speech clear.
VoxGuard: Evaluating User and Attribute Privacy in Speech via Membership Inference Attacks
Cryptography and Security
Protects voices from being identified or tracked.
Towards Better Attribute Inference Vulnerability Measures
Cryptography and Security
Protects private info while keeping data useful.