Synthetic Audio Forensics Evaluation (SAFE) Challenge
By: Kirill Trapeznikov, Paul Cummer, Pranay Pherwani, and more
Potential Business Impact:
Finds fake voices in recordings.
The increasing realism of synthetic speech generated by advanced text-to-speech (TTS) models, coupled with post-processing and laundering techniques, presents a significant challenge for audio forensic detection. In this paper, we introduce the SAFE (Synthetic Audio Forensics Evaluation) Challenge, a fully blind evaluation framework designed to benchmark detection models across progressively harder scenarios: raw synthetic speech, processed audio (e.g., compression, resampling), and laundered audio intended to evade forensic analysis. The SAFE Challenge comprised 90 hours of audio in 21,000 samples, drawn from 21 different real sources and 17 different TTS models, split across 3 tasks. We present the challenge, its evaluation design and tasks, dataset details, and initial insights into the strengths and limitations of current approaches, offering a foundation for advancing synthetic audio detection research. More information is available at https://stresearch.github.io/SAFE/.
Similar Papers
Multilingual Dataset Integration Strategies for Robust Audio Deepfake Detection: A SAFE Challenge System
Audio and Speech Processing
Finds fake voices in recordings.
SafeSpeech: Robust and Universal Voice Protection Against Malicious Speech Synthesis
Sound
Stops fake voices from being made from your speech.
EchoFake: A Replay-Aware Dataset for Practical Speech Deepfake Detection
Audio and Speech Processing
Stops fake voices from tricking people over the phone.