You Are What You Say: Exploiting Linguistic Content for VoicePrivacy Attacks
By: Ünal Ege Gaznepoglu, Anna Leschanowsky, Ahmad Aloradi, et al.
Potential Business Impact:
Makes it harder to hide who is talking.
Speaker anonymization systems hide the identity of speakers while preserving other information such as linguistic content and emotions. To evaluate their privacy benefits, attacks in the form of automatic speaker verification (ASV) systems are employed. In this study, we assess the impact of intra-speaker linguistic content similarity in the attacker training and evaluation datasets by adapting BERT, a language model, as an ASV system. On the VoicePrivacy Attacker Challenge datasets, our method achieves a mean equal error rate (EER) of 35%, with certain speakers attaining EERs as low as 2%, based solely on the textual content of their utterances. Our explainability study reveals that the system's decisions are linked to semantically similar keywords within utterances, stemming from how LibriSpeech is curated. Our study suggests reworking the VoicePrivacy datasets to ensure a fair and unbiased evaluation and challenges the reliance on a global EER for privacy evaluations.
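The abstract reports its attack strength as an equal error rate (EER): the operating point of a verification system where the false acceptance rate (FAR) and false rejection rate (FRR) coincide, so a 2% EER for a speaker means their trials are almost perfectly separable from impostor trials. The paper does not publish its scoring code; below is a minimal, generic sketch of how an EER is typically computed from genuine and impostor similarity scores (function name and score arrays are illustrative, not from the paper).

```python
import numpy as np

def compute_eer(genuine_scores, impostor_scores):
    """Equal error rate: the threshold where the false acceptance
    rate (FAR) equals the false rejection rate (FRR).

    genuine_scores:  similarity scores for same-speaker trials
    impostor_scores: similarity scores for different-speaker trials
    (Illustrative helper; not the paper's actual evaluation code.)
    """
    scores = np.concatenate([genuine_scores, impostor_scores]).astype(float)
    labels = np.concatenate([np.ones(len(genuine_scores)),
                             np.zeros(len(impostor_scores))])

    # Sweep every observed score as a candidate decision threshold.
    order = np.argsort(scores)
    labels = labels[order]
    n_gen = labels.sum()
    n_imp = len(labels) - n_gen

    # FRR: genuine trials at or below the threshold (wrongly rejected).
    frr = np.cumsum(labels) / n_gen
    # FAR: impostor trials above the threshold (wrongly accepted).
    far = 1.0 - np.cumsum(1.0 - labels) / n_imp

    # EER is where the two error curves cross.
    idx = np.argmin(np.abs(far - frr))
    return (far[idx] + frr[idx]) / 2.0
```

With perfectly separated scores the EER is 0; fully overlapping genuine and impostor distributions push it toward 50%, which is why the paper's 35% mean (and 2% for some speakers) indicates real linguistic leakage rather than chance-level performance.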
Similar Papers
What You Read Isn't What You Hear: Linguistic Sensitivity in Deepfake Speech Detection
Machine Learning (CS)
Makes fake voices fool voice detectors.
Analyzing and Improving Speaker Similarity Assessment for Speech Synthesis
Sound
Makes cloned voices sound more like real people.
VoxGuard: Evaluating User and Attribute Privacy in Speech via Membership Inference Attacks
Cryptography and Security
Protects voices from being identified or tracked.