Continual Audio Deepfake Detection via Universal Adversarial Perturbation
By: Wangjie Li, Lin Li, Qingyang Hong
Potential Business Impact:
Detects fake voices without needing to store old training examples.
The rapid advancement of speech synthesis and voice conversion technologies has raised significant security concerns in multimedia forensics. Although current detection models demonstrate impressive performance, they struggle to maintain effectiveness against constantly evolving deepfake attacks. Additionally, continually fine-tuning these models using historical training data incurs substantial computational and storage costs. To address these limitations, we propose a novel framework that incorporates Universal Adversarial Perturbation (UAP) into audio deepfake detection, enabling models to retain knowledge of historical spoofing distribution without direct access to past data. Our method integrates UAP seamlessly with pre-trained self-supervised audio models during fine-tuning. Extensive experiments validate the effectiveness of our approach, showcasing its potential as an efficient solution for continual learning in audio deepfake detection.
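The core idea of attaching a universal perturbation to inputs so a model retains an earlier spoofing distribution can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name `apply_uap`, the tanh amplitude bound, and the epsilon value are all assumptions made for clarity. The perturbation `uap` is a single learnable vector shared across all waveforms; during fine-tuning it is optimized jointly with the detector so that replaying it later stands in for access to historical data.

```python
import numpy as np

def apply_uap(wav, uap, eps=0.01):
    """Add an amplitude-bounded universal perturbation to a waveform.

    wav : 1-D float array, audio samples in [-1, 1]
    uap : 1-D float array of the same length, the shared learnable
          perturbation (illustrative; the paper's parameterization
          may differ)
    eps : maximum perturbation amplitude
    """
    delta = eps * np.tanh(uap)              # squash into [-eps, eps]
    return np.clip(wav + delta, -1.0, 1.0)  # keep valid audio range
```

Because the same `uap` vector is applied to every input, its storage cost is a single waveform-length tensor, which is what makes the approach cheap relative to retaining past training data.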
Similar Papers
A Novel and Practical Universal Adversarial Perturbations against Deep Reinforcement Learning based Intrusion Detection Systems
Cryptography and Security
Tricks intrusion detection systems into missing malicious network traffic.
Pindrop it! Audio and Visual Deepfake Countermeasures for Robust Detection and Fine-Grained Localization
CV and Pattern Recognition
Detects fake videos and pinpoints even small manipulated segments.
Frustratingly Easy Zero-Day Audio DeepFake Detection via Retrieval Augmentation and Profile Matching
Sound
Finds fake voices even if they are new.