From Sharpness to Better Generalization for Speech Deepfake Detection
By: Wen Huang, Xuechen Liu, Xin Wang, and more
Potential Business Impact:
Makes fake-voice detectors work on voices and recording conditions they have never seen.
Generalization remains a critical challenge in speech deepfake detection (SDD). While various approaches aim to improve robustness, generalization is typically assessed through performance metrics like equal error rate without a theoretical framework to explain model performance. This work investigates sharpness as a theoretical proxy for generalization in SDD. We analyze how sharpness responds to domain shifts and find it increases in unseen conditions, indicating higher model sensitivity. Based on this, we apply Sharpness-Aware Minimization (SAM) to reduce sharpness explicitly, leading to better and more stable performance across diverse unseen test sets. Furthermore, correlation analysis confirms a statistically significant relationship between sharpness and generalization in most test settings. These findings suggest that sharpness can serve as a theoretical indicator for generalization in SDD and that sharpness-aware training offers a promising strategy for improving robustness.
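To make the sharpness-aware training idea concrete, below is a minimal sketch of a single Sharpness-Aware Minimization (SAM) training step in PyTorch. It is an illustration of the general SAM procedure (ascend to a nearby worst-case weight perturbation, then descend using the gradient taken there), not the authors' implementation; `model`, `loss_fn`, `base_optimizer`, `batch`, and the radius `rho` are all assumed placeholders.

```python
# Minimal SAM step sketch (assumed names: model, loss_fn, base_optimizer, batch, rho).
import torch

def sam_step(model, loss_fn, batch, base_optimizer, rho=0.05):
    inputs, labels = batch

    # First forward/backward pass: gradient at the current weights w.
    loss = loss_fn(model(inputs), labels)
    loss.backward()

    # Ascent step: perturb each parameter by rho * g / ||g||_2,
    # moving to the local worst case that defines sharpness.
    with torch.no_grad():
        grad_norm = torch.norm(
            torch.stack([p.grad.norm(p=2) for p in model.parameters()
                         if p.grad is not None]), p=2)
        scale = rho / (grad_norm + 1e-12)
        perturbations = []
        for p in model.parameters():
            if p.grad is None:
                perturbations.append(None)
                continue
            e = p.grad * scale
            p.add_(e)                      # w <- w + eps
            perturbations.append(e)
    model.zero_grad()

    # Second forward/backward pass: gradient at the perturbed weights w + eps.
    loss_perturbed = loss_fn(model(inputs), labels)
    loss_perturbed.backward()

    # Undo the perturbation, then update w with the sharpness-aware gradient.
    with torch.no_grad():
        for p, e in zip(model.parameters(), perturbations):
            if e is not None:
                p.sub_(e)                  # w <- w - eps
    base_optimizer.step()
    base_optimizer.zero_grad()
    return loss.item()
```

The second gradient is evaluated at the perturbed point but applied to the original weights, which is what biases training toward flatter (less sharp) minima.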
Similar Papers
Unveiling m-Sharpness Through the Structure of Stochastic Gradient Noise
Machine Learning (CS)
Makes computer learning models work better.
Sharpness-Aware Machine Unlearning
Machine Learning (CS)
Makes AI forget bad data without losing good data.
Sharpness-Aware Data Generation for Zero-shot Quantization
Machine Learning (CS)
Makes AI learn better without seeing real examples.