Why Speech Deepfake Detectors Won't Generalize: The Limits of Detection in an Open World
By: Visar Berisha, Prad Kadambi, Isabella Lenz
Potential Business Impact:
Shows why computers struggle to detect fake voices.
Speech deepfake detectors are often evaluated on clean, benchmark-style conditions, but deployment occurs in an open world of shifting devices, sampling rates, codecs, environments, and attack families. This creates a "coverage debt" for AI-based detectors: every new condition multiplies with existing ones, producing data blind spots that grow faster than data can be collected. Because attackers can target these uncovered regions, worst-case performance (not average benchmark scores) determines security. To demonstrate the impact of the coverage debt problem, we analyze results from a recent cross-testing framework. Grouping performance by bona fide domain and spoof release year, two patterns emerge: newer synthesizers erase the legacy artifacts detectors rely on, and conversational speech domains (teleconferencing, interviews, social media) are consistently the hardest to secure. These findings show that detection alone should not be relied upon for high-stakes decisions. Detectors should be treated as auxiliary signals within layered defenses that include provenance, personhood credentials, and policy safeguards.
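To make the "coverage debt" argument concrete, here is a minimal sketch of how condition axes combine multiplicatively while a data-collection budget grows only linearly. The axis names, value counts, and the covered-cell figure are hypothetical illustrations, not numbers from the paper.

```python
# Minimal sketch of the "coverage debt" argument: deployment conditions
# combine multiplicatively across axes, while training data covers only a
# fixed number of combinations. All axes and counts below are hypothetical.
condition_axes = {
    "device":        ["headset", "laptop mic", "phone", "far-field"],
    "sampling_rate": ["8 kHz", "16 kHz", "44.1 kHz"],
    "codec":         ["Opus", "AMR-NB", "MP3", "raw PCM"],
    "environment":   ["studio", "office", "street", "car"],
    "attack_family": ["TTS-2020", "TTS-2023", "VC", "replay", "diffusion-TTS"],
}

total_cells = 1
for values in condition_axes.values():
    total_cells *= len(values)          # multiplicative growth across axes

covered_cells = 120                     # hypothetical: combinations present in training data
coverage = covered_cells / total_cells

print(f"condition combinations: {total_cells}")   # 4 * 3 * 4 * 4 * 5 = 960
print(f"covered in training:    {covered_cells}")
print(f"coverage fraction:      {coverage:.1%}")  # 12.5%; the rest is uncovered attack surface
```

Because an adversary can choose where to strike, the security-relevant quantity is the worst-case error over the uncovered cells, not the average score over the covered ones; adding another axis (a new codec, a new synthesizer family) multiplies the cell count again.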
Similar Papers
What You Read Isn't What You Hear: Linguistic Sensitivity in Deepfake Speech Detection
Machine Learning (CS)
Makes fake voices fool voice detectors.
Benchmarking Fake Voice Detection in the Fake Voice Generation Arms Race
Sound
Finds fake voices that trick sound detectors.
Can Current Detectors Catch Face-to-Voice Deepfake Attacks?
Cryptography and Security
Detects fake voices made from just a face.