Continual Audio Deepfake Detection via Universal Adversarial Perturbation

Published: November 25, 2025 | arXiv ID: 2511.19974v1

By: Wangjie Li, Lin Li, Qingyang Hong

Potential Business Impact:

Detects fake voices without needing to store or revisit old training examples.

Business Areas:
Speech Recognition Data and Analytics, Software

The rapid advancement of speech synthesis and voice conversion technologies has raised significant security concerns in multimedia forensics. Although current detection models demonstrate impressive performance, they struggle to maintain effectiveness against constantly evolving deepfake attacks. Additionally, continually fine-tuning these models using historical training data incurs substantial computational and storage costs. To address these limitations, we propose a novel framework that incorporates Universal Adversarial Perturbation (UAP) into audio deepfake detection, enabling models to retain knowledge of historical spoofing distribution without direct access to past data. Our method integrates UAP seamlessly with pre-trained self-supervised audio models during fine-tuning. Extensive experiments validate the effectiveness of our approach, showcasing its potential as an efficient solution for continual learning in audio deepfake detection.
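The abstract describes the method only at a high level. As a rough illustration of the general idea, the minimal PyTorch sketch below adds a small learnable universal perturbation to input waveforms before a self-supervised encoder and trains it jointly with the classification head; all names (PlaceholderSSLEncoder, UAPDetector, eps) and hyper-parameters are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch: a learnable universal adversarial perturbation (UAP) shared
# across utterances, added to the waveform before a (placeholder) pre-trained
# self-supervised encoder. In the paper the UAP serves to retain knowledge of
# earlier spoofing distributions; here we only show the mechanical wiring.

import torch
import torch.nn as nn


class PlaceholderSSLEncoder(nn.Module):
    """Stand-in for a pre-trained self-supervised audio model (e.g. wav2vec 2.0)."""

    def __init__(self, dim: int = 128):
        super().__init__()
        self.conv = nn.Conv1d(1, dim, kernel_size=400, stride=160)

    def forward(self, wav: torch.Tensor) -> torch.Tensor:  # wav: (batch, samples)
        feats = self.conv(wav.unsqueeze(1))                 # (batch, dim, frames)
        return feats.mean(dim=-1)                           # utterance-level embedding


class UAPDetector(nn.Module):
    def __init__(self, num_samples: int = 16000, dim: int = 128, eps: float = 0.01):
        super().__init__()
        # One universal perturbation shared by every input, constrained to be small.
        self.uap = nn.Parameter(torch.zeros(num_samples))
        self.eps = eps
        self.encoder = PlaceholderSSLEncoder(dim)
        self.head = nn.Linear(dim, 2)                       # bonafide vs. spoof

    def forward(self, wav: torch.Tensor) -> torch.Tensor:
        # Clamp the UAP to an L-infinity ball so it stays near-imperceptible.
        delta = self.uap.clamp(-self.eps, self.eps)
        return self.head(self.encoder(wav + delta))


# Toy usage: fine-tune the detector and the UAP jointly on current attack data.
model = UAPDetector()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
wav = torch.randn(4, 16000)                                 # dummy 1-second clips
labels = torch.randint(0, 2, (4,))
loss = nn.functional.cross_entropy(model(wav), labels)
loss.backward()
opt.step()
```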

Country of Origin
🇨🇳 China

Page Count
6 pages

Category
Computer Science: Sound