What You Read Isn't What You Hear: Linguistic Sensitivity in Deepfake Speech Detection
By: Binh Nguyen, Shuji Shi, Ryan Ofman, and others
Potential Business Impact:
Shows that small wording changes let fake voices slip past voice detectors.
Recent advances in text-to-speech technologies have enabled realistic voice generation, fueling audio-based deepfake attacks such as fraud and impersonation. While audio anti-spoofing systems are critical for detecting such threats, prior work has predominantly focused on acoustic-level perturbations, leaving the impact of linguistic variation largely unexplored. In this paper, we investigate the linguistic sensitivity of both open-source and commercial anti-spoofing detectors by introducing transcript-level adversarial attacks. Our extensive evaluation reveals that even minor linguistic perturbations can significantly degrade detection accuracy: attack success rates surpass 60% on several open-source detector-voice pairs, and notably, one commercial detector's accuracy drops from 100% on synthetic audio to just 32%. Through a comprehensive feature attribution analysis, we identify that both linguistic complexity and model-level audio embedding similarity contribute strongly to detector vulnerability. We further demonstrate the real-world risk via a case study replicating the Brad Pitt audio deepfake scam, using transcript adversarial attacks to completely bypass commercial detectors. These results highlight the need to move beyond purely acoustic defenses and account for linguistic variation in the design of robust anti-spoofing systems. All source code will be publicly available.
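To make the evaluation described above concrete, here is a minimal sketch of what a transcript-level attack loop could look like. It is not the authors' implementation: the `perturb`, `synthesize`, and `detect` callables are assumed placeholders standing in for a linguistic perturbation (e.g. synonym swaps or filler insertion), a TTS voice, and an anti-spoofing detector that returns a spoof probability. The attack success rate is the fraction of perturbed synthetic clips the detector labels as real.

```python
# Hypothetical sketch of a transcript-level adversarial evaluation loop:
# perturb the transcript, synthesize it with a TTS voice, score the clip
# with an anti-spoofing detector, and count the attack as successful when
# the synthetic clip is classified as genuine speech.
from typing import Callable, List


def attack_success_rate(
    transcripts: List[str],
    perturb: Callable[[str], str],       # linguistic perturbation (assumed)
    synthesize: Callable[[str], bytes],  # TTS voice -> raw audio (assumed)
    detect: Callable[[bytes], float],    # detector score: P(audio is spoofed)
    threshold: float = 0.5,
) -> float:
    """Fraction of perturbed synthetic clips the detector labels as real."""
    fooled = 0
    for text in transcripts:
        adversarial_text = perturb(text)
        audio = synthesize(adversarial_text)
        spoof_score = detect(audio)
        if spoof_score < threshold:  # detector judges the clip to be genuine
            fooled += 1
    return fooled / len(transcripts) if transcripts else 0.0
```

Because the helpers are passed in as parameters, the same loop can be reused across detector-voice pairs, which is how per-pair success rates like the 60%+ figures reported above would be tabulated.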
Similar Papers
Measuring the Robustness of Audio Deepfake Detectors
Cryptography and Security
Finds fake voices even when they are noisy.
Why Speech Deepfake Detectors Won't Generalize: The Limits of Detection in an Open World
Cryptography and Security
Explains why fake voices are hard for computers to detect.
Pitch Imperfect: Detecting Audio Deepfakes Through Acoustic Prosodic Analysis
Sound
Finds fake voices by listening to speech patterns.