MultiAPI Spoof: A Multi-API Dataset and Local-Attention Network for Speech Anti-spoofing Detection
By: Xueping Zhang, Zhenshan Zhang, Yechen Wang, and more
Potential Business Impact:
Detects fake voices from many different sources.
Existing speech anti-spoofing benchmarks rely on a narrow set of public models, creating a substantial gap from real-world scenarios in which commercial systems employ diverse, often proprietary APIs. To address this issue, we introduce MultiAPI Spoof, a multi-API audio anti-spoofing dataset comprising about 230 hours of synthetic speech generated by 30 distinct APIs, including commercial services, open-source models, and online platforms. Based on this dataset, we define the API tracing task, enabling fine-grained attribution of spoofed audio to its generation source. We further propose Nes2Net-LA, a local-attention enhanced variant of Nes2Net that improves local context modeling and fine-grained spoofing feature extraction. Experiments show that Nes2Net-LA achieves state-of-the-art performance and offers superior robustness, particularly under diverse and unseen spoofing conditions. The code (https://github.com/XuepingZhang/MultiAPI-Spoof) and dataset (https://xuepingzhang.github.io/MultiAPI-Spoof-Dataset/) have been released.
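The abstract does not spell out how the local-attention enhancement is wired into Nes2Net, so the sketch below is only a minimal illustration of the general idea of windowed (local) self-attention over frame-level features; the class name, feature dimension, window size, and head count are assumptions, not the paper's actual design.

```python
import torch
import torch.nn as nn


class LocalAttentionBlock(nn.Module):
    """Windowed (local) multi-head self-attention over a frame sequence.

    Illustrative sketch only: the real Nes2Net-LA architecture, its window
    size, and its placement in the backbone are not given in the abstract.
    """

    def __init__(self, dim: int = 256, num_heads: int = 4, window: int = 16):
        super().__init__()
        self.window = window
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, frames, dim). Pad so the frame axis splits into windows.
        b, t, d = x.shape
        pad = (-t) % self.window
        if pad:
            x = torch.nn.functional.pad(x, (0, 0, 0, pad))
        n_win = x.shape[1] // self.window
        # Fold each window into the batch dimension so attention is local:
        # frames only attend to other frames inside the same window.
        xw = x.reshape(b * n_win, self.window, d)
        h = self.norm(xw)
        out, _ = self.attn(h, h, h)
        out = (xw + out).reshape(b, n_win * self.window, d)
        return out[:, :t]  # drop padding


if __name__ == "__main__":
    frames = torch.randn(2, 100, 256)   # e.g. frame-level front-end features
    block = LocalAttentionBlock()
    print(block(frames).shape)          # torch.Size([2, 100, 256])
```

In this kind of design, restricting attention to short windows keeps the module focused on local context, which is consistent with the abstract's stated goal of capturing fine-grained spoofing artifacts rather than only global utterance-level cues.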
Similar Papers
EchoFake: A Replay-Aware Dataset for Practical Speech Deepfake Detection
Audio and Speech Processing
Stops fake voices from tricking people over the phone.
SEA-Spoof: Bridging The Gap in Multilingual Audio Deepfake Detection for South-East Asian
Sound
Finds fake voices in Southeast Asian languages.
CompSpoof: A Dataset and Joint Learning Framework for Component-Level Audio Anti-spoofing Countermeasures
Sound
Detects fake sounds hidden in real audio.