SEA-Spoof: Bridging the Gap in Multilingual Audio Deepfake Detection for South-East Asian Languages
By: Jinyang Wu, Nana Hou, Zihan Pan, and more
Potential Business Impact:
Finds fake voices in Southeast Asian languages.
The rapid growth of the digital economy in South-East Asia (SEA) has amplified the risks of audio deepfakes, yet current datasets cover SEA languages only sparsely, leaving detection models poorly equipped for this critical region. The gap has real consequences: models trained on high-resource languages collapse when applied to SEA, due to mismatches in synthesis quality, language-specific characteristics, and data scarcity. To close this gap, we present SEA-Spoof, the first large-scale Audio Deepfake Detection (ADD) dataset dedicated to SEA languages. SEA-Spoof spans 300+ hours of paired real and spoof speech across Tamil, Hindi, Thai, Indonesian, Malay, and Vietnamese. Spoof samples are generated from a diverse mix of state-of-the-art open-source and commercial systems, capturing wide variability in style and fidelity. Benchmarking state-of-the-art detection models reveals severe cross-lingual degradation, but fine-tuning on SEA-Spoof dramatically restores performance across languages and synthesis sources. These results highlight the urgent need for SEA-focused research and establish SEA-Spoof as a foundation for developing robust, cross-lingual, and fraud-resilient detection systems.
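Cross-lingual degradation in ADD benchmarks is conventionally measured with the Equal Error Rate (EER): the operating point where the false-acceptance rate on spoof audio equals the false-rejection rate on genuine audio. The paper does not publish its scoring code, so this is only an illustrative sketch of how an EER could be computed from detector scores (the `compute_eer` helper and the convention that higher scores mean "genuine" are assumptions, not from the paper):

```python
import numpy as np

def compute_eer(scores, labels):
    """Illustrative EER: find the threshold where the false-rejection
    rate (genuine flagged as spoof) meets the false-acceptance rate
    (spoof passed as genuine). labels: 1 = genuine, 0 = spoof;
    higher score = more likely genuine (an assumed convention)."""
    order = np.argsort(scores)
    labels = np.asarray(labels, dtype=float)[order]
    n_pos = labels.sum()                 # genuine trials
    n_neg = len(labels) - n_pos          # spoof trials
    # Sweep a threshold over each sorted score; everything at or
    # below the threshold is rejected as spoof.
    frr = np.cumsum(labels) / n_pos                 # genuine rejected so far
    far = (n_neg - np.cumsum(1 - labels)) / n_neg   # spoof still accepted
    idx = np.argmin(np.abs(frr - far))
    return (frr[idx] + far[idx]) / 2

# Usage: perfectly separated scores give an EER of 0.
print(compute_eer([0.1, 0.2, 0.8, 0.9], [0, 0, 1, 1]))  # -> 0.0
```

A lower EER is better; the severe cross-lingual degradation described above would show up as a sharp EER increase when a detector trained on high-resource languages is scored on the SEA-Spoof test sets.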
Similar Papers
BanglaFake: Constructing and Evaluating a Specialized Bengali Deepfake Audio Dataset
Sound
Helps catch fake voices in Bengali.
Multilingual Dataset Integration Strategies for Robust Audio Deepfake Detection: A SAFE Challenge System
Audio and Speech Processing
Finds fake voices in recordings.
EchoFake: A Replay-Aware Dataset for Practical Speech Deepfake Detection
Audio and Speech Processing
Stops fake voices from tricking people over the phone.