Multilingual Dataset Integration Strategies for Robust Audio Deepfake Detection: A SAFE Challenge System
By: Hashim Ali, Surya Subramani, Lekha Bollinani, and more
Potential Business Impact:
Finds fake voices in recordings.
The SAFE Challenge evaluates synthetic speech detection across three tasks: unmodified audio, processed audio with compression artifacts, and laundered audio designed to evade detection. We systematically explore self-supervised learning (SSL) front-ends, training data compositions, and audio length configurations for robust deepfake detection. Our AASIST-based approach incorporates a WavLM Large front-end with RawBoost augmentation, trained on a multilingual dataset of 256,600 samples spanning 9 languages and more than 70 TTS systems drawn from CodecFake, MLAAD v5, SpoofCeleb, Famous Figures, and MAILABS. Through extensive experimentation with different SSL front-ends, three training data versions, and two audio lengths, we achieved second place in both Task 1 (unmodified audio detection) and Task 3 (laundered audio detection), demonstrating strong generalization and robustness.
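As a rough illustration of the pipeline the abstract describes, the sketch below wires an SSL front-end into a small classifier head. This is a minimal sketch under stated assumptions, not the authors' implementation: it assumes the Hugging Face `microsoft/wavlm-large` checkpoint for the WavLM Large front-end and substitutes a mean-pooled linear head for the AASIST graph-attention back-end; RawBoost augmentation and the multilingual training data are omitted.

```python
# Minimal sketch: SSL front-end (WavLM Large) feeding a simple classifier head.
# The real system uses an AASIST back-end; the linear head here is a stand-in.
import torch
import torch.nn as nn
from transformers import WavLMModel


class SSLDeepfakeDetector(nn.Module):
    def __init__(self, ssl_name: str = "microsoft/wavlm-large"):
        super().__init__()
        self.frontend = WavLMModel.from_pretrained(ssl_name)  # SSL front-end
        hidden = self.frontend.config.hidden_size             # 1024 for WavLM Large
        self.head = nn.Linear(hidden, 2)                       # bona fide vs. spoof logits

    def forward(self, waveform: torch.Tensor) -> torch.Tensor:
        # waveform: (batch, samples) of 16 kHz audio
        feats = self.frontend(waveform).last_hidden_state      # (batch, frames, hidden)
        return self.head(feats.mean(dim=1))                    # utterance-level scores


# Usage: score a 4-second clip of 16 kHz audio (random noise used as a placeholder)
model = SSLDeepfakeDetector()
scores = model(torch.randn(1, 4 * 16000))
print(scores.shape)  # torch.Size([1, 2])
```

Mean pooling over frames is only one way to aggregate the SSL features; the AASIST back-end instead models spectro-temporal relationships with graph attention before scoring the utterance.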
Similar Papers
Synthetic Audio Forensics Evaluation (SAFE) Challenge
Sound
Finds fake voices in recordings.
KLASSify to Verify: Audio-Visual Deepfake Detection Using SSL-based Audio and Handcrafted Visual Features
Audio and Speech Processing
Finds fake videos by listening to the sound.
Multilingual Source Tracing of Speech Deepfakes: A First Benchmark
Audio and Speech Processing
Finds who made fake voices, even in other languages.