A Data-Centric Approach to Generalizable Speech Deepfake Detection
By: Wen Huang, Yuchen Mao, Yanmin Qian
Achieving robust generalization in speech deepfake detection (SDD) remains a primary challenge, as models often fail to detect unseen forgery methods. While research has focused on model-centric and algorithm-centric solutions, the impact of data composition remains underexplored. This paper proposes a data-centric approach, analyzing the SDD data landscape from two practical perspectives: constructing a single dataset and aggregating multiple datasets. For the first perspective, we conduct a large-scale empirical study to characterize the data scaling laws for SDD, quantifying the impact of source and generator diversity. For the second, we propose the Diversity-Optimized Sampling Strategy (DOSS), a principled framework for mixing heterogeneous data with two implementations: DOSS-Select (pruning) and DOSS-Weight (re-weighting). Our experiments show that DOSS-Select outperforms the naive aggregation baseline while using only 3% of the total available data. Furthermore, our final model, trained on a 12k-hour curated data pool with the optimal DOSS-Weight strategy, achieves state-of-the-art performance, outperforming large-scale baselines with greater data and model efficiency on both public benchmarks and a new challenge set built from various commercial APIs.
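To make the two DOSS variants concrete, the sketch below illustrates one plausible reading of diversity-optimized sampling. The abstract does not specify the diversity criterion, so this toy version approximates diversity as coverage of (source, generator) pairs; the `Subset`, `doss_select`, and `doss_weight` names, the greedy gain-per-hour rule, and the leave-one-out weighting are all illustrative assumptions, not the authors' actual method.

```python
# Toy illustration of diversity-driven data pruning (DOSS-Select-style) and
# re-weighting (DOSS-Weight-style). Assumption: each candidate subset is tagged
# with its bona fide sources and spoof generators; "diversity" is taken to be
# the number of distinct (source, generator) pairs covered.
from dataclasses import dataclass


@dataclass
class Subset:
    name: str
    hours: float
    sources: set       # e.g., bona fide corpora the subset draws from
    generators: set    # e.g., TTS / VC systems used to create the spoofs


def diversity(pool):
    """Count distinct (source, generator) pairs covered by a pool of subsets."""
    return len({(s, g) for sub in pool for s in sub.sources for g in sub.generators})


def doss_select(candidates, budget_hours):
    """Greedy pruning: keep subsets with the highest diversity gain per hour."""
    chosen, remaining, used = [], list(candidates), 0.0
    while remaining:
        affordable = [c for c in remaining if used + c.hours <= budget_hours]
        if not affordable:
            break
        best = max(affordable,
                   key=lambda c: (diversity(chosen + [c]) - diversity(chosen)) / c.hours)
        if diversity(chosen + [best]) == diversity(chosen):
            break  # no subset adds new (source, generator) coverage
        chosen.append(best)
        used += best.hours
        remaining.remove(best)
    return chosen


def doss_weight(candidates):
    """Re-weighting: sample each subset in proportion to its marginal diversity."""
    gains = {c.name: max(diversity(candidates)
                         - diversity([o for o in candidates if o is not c]), 1)
             for c in candidates}
    total = sum(gains.values())
    return {name: gain / total for name, gain in gains.items()}
```

Under these assumptions, pruning keeps a small, maximally diverse fraction of the pool (mirroring the reported 3% selection), while re-weighting retains all data but biases sampling toward subsets that contribute unique source/generator coverage.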