Source Verification for Speech Deepfakes
By: Viola Negroni, Davide Salvi, Paolo Bestagini, et al.
Potential Business Impact:
Checks whether fake voices were made by the same generator.
With the proliferation of speech deepfake generators, it becomes crucial not only to assess the authenticity of synthetic audio but also to trace its origin. While source attribution models attempt to address this challenge, they often struggle in open-set conditions against unseen generators. In this paper, we introduce the source verification task, which, inspired by speaker verification, determines whether a test track was produced using the same model as a set of reference signals. Our approach leverages embeddings from a classifier trained for source attribution, computing distance scores between tracks to assess whether they originate from the same source. We evaluate multiple models across diverse scenarios, analyzing the impact of speaker diversity, language mismatch, and post-processing operations. This work provides the first exploration of source verification, highlighting its potential and vulnerabilities, and offers insights for real-world forensic applications.
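The verification step described above can be sketched as a simple distance test: embed the test track and the reference tracks with a source-attribution model, then score the test embedding against the references. The sketch below is a minimal illustration, not the paper's implementation; the embeddings are assumed to come from some pretrained classifier (here they are just plain vectors), the centroid-plus-cosine-similarity scoring and the threshold value are hypothetical choices for illustration.

```python
import numpy as np

def verify_source(test_emb, ref_embs, threshold=0.5):
    """Decide whether a test track plausibly shares a source with the
    reference tracks, by cosine similarity between the test embedding
    and the centroid of the reference embeddings.

    Returns (accept, score): accept is True when the similarity score
    meets the threshold. Embeddings would come from a source-attribution
    model in practice; any fixed-length vectors work here.
    """
    test_emb = np.asarray(test_emb, dtype=float)
    centroid = np.asarray(ref_embs, dtype=float).mean(axis=0)
    sim = float(
        np.dot(test_emb, centroid)
        / (np.linalg.norm(test_emb) * np.linalg.norm(centroid))
    )
    return sim >= threshold, sim

# Toy example: a test vector aligned with the references is accepted,
# an opposed one is rejected.
refs = [[1.0, 0.0], [0.9, 0.1]]
print(verify_source([1.0, 0.0], refs))   # high similarity -> accepted
print(verify_source([-1.0, 0.0], refs))  # negative similarity -> rejected
```

In a real forensic pipeline the threshold would be calibrated on held-out data (e.g. via an equal-error-rate criterion), since speaker diversity, language mismatch, and post-processing all shift the score distributions, as the paper's evaluation examines.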
Similar Papers
Multilingual Source Tracing of Speech Deepfakes: A First Benchmark
Audio and Speech Processing
Finds who made fake voices, even in other languages.
Forensic Similarity for Speech Deepfakes
Sound
Finds fake voices by matching sound clues.
Synthetic Speech Source Tracing using Metric Learning
Sound
Finds who made fake voices.