Source Verification for Speech Deepfakes

Published: May 20, 2025 | arXiv ID: 2505.14188v1

By: Viola Negroni, Davide Salvi, Paolo Bestagini, and more

Potential Business Impact:

Identifies which generation model produced a synthetic voice, supporting forensic tracing of audio deepfakes back to their source.

Business Areas:
Speech Recognition Data and Analytics, Software

With the proliferation of speech deepfake generators, it becomes crucial not only to assess the authenticity of synthetic audio but also to trace its origin. While source attribution models attempt to address this challenge, they often struggle in open-set conditions against unseen generators. In this paper, we introduce the source verification task, which, inspired by speaker verification, determines whether a test track was produced using the same model as a set of reference signals. Our approach leverages embeddings from a classifier trained for source attribution, computing distance scores between tracks to assess whether they originate from the same source. We evaluate multiple models across diverse scenarios, analyzing the impact of speaker diversity, language mismatch, and post-processing operations. This work provides the first exploration of source verification, highlighting its potential and vulnerabilities, and offers insights for real-world forensic applications.
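The verification step described above can be illustrated with a minimal sketch: embeddings produced by a source-attribution classifier are compared with a distance score, and a threshold decides whether the test track matches the reference set. The use of cosine similarity, a reference centroid, and the specific threshold value are assumptions for illustration, not details taken from the paper.

```python
# Minimal sketch of the source verification scoring step.
# Assumptions (illustrative, not from the paper): embeddings are NumPy vectors
# already extracted by a source-attribution classifier; cosine similarity
# against the reference centroid with a fixed threshold is the decision rule.

import numpy as np

def cosine_score(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def verify_source(test_emb: np.ndarray,
                  reference_embs: list[np.ndarray],
                  threshold: float = 0.7) -> tuple[bool, float]:
    """Decide whether the test track was produced by the same generator as
    the reference tracks, comparing against the centroid of their embeddings."""
    centroid = np.mean(np.stack(reference_embs), axis=0)
    score = cosine_score(test_emb, centroid)
    return score >= threshold, score

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    refs = [rng.normal(size=192) for _ in range(5)]  # embeddings of reference tracks
    test = refs[0] + 0.05 * rng.normal(size=192)     # a track close to the references
    same_source, score = verify_source(test, refs)
    print(f"same source: {same_source}, score: {score:.3f}")
```

In practice the threshold would be calibrated on held-out data (e.g., via an equal error rate criterion), since the separability of scores depends on speaker diversity, language mismatch, and post-processing, as studied in the paper.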

Country of Origin
🇮🇹 Italy

Page Count
5 pages

Category
Computer Science: Sound