Forensic deepfake audio detection using segmental speech features
By: Tianle Yang, Chengzhe Sun, Siwei Lyu, and more
Potential Business Impact:
Finds fake voices by listening to how sounds are made.
This study explores the potential of using acoustic features of segmental speech sounds to detect deepfake audio. These features are highly interpretable because of their close relationship with human articulatory processes, and they are expected to be more difficult for deepfake models to replicate. The results demonstrate that certain segmental features commonly used in forensic voice comparison (FVC) are effective in identifying deepfakes, whereas some global features provide little value. These findings underscore the need to approach audio deepfake detection with methods distinct from those employed in traditional FVC, and they offer a new perspective on leveraging segmental features for this purpose.
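To make the idea concrete, the sketch below shows one plausible way to build a segmental-feature detector: extract vowel formants (F1/F2) per labelled segment with Praat via the parselmouth library, then train a simple classifier on the resulting per-recording vectors. This is an illustrative assumption, not the authors' pipeline; the feature set, the upstream segmentation step, the synthetic demo data, and the logistic-regression classifier are all stand-ins.

```python
import numpy as np
import parselmouth  # Python wrapper around Praat
from sklearn.linear_model import LogisticRegression

def segmental_formants(wav_path, vowel_segments):
    """Return mean F1/F2 (Hz) averaged over labelled vowel segments.

    `vowel_segments` is a list of (start_s, end_s) tuples; the
    segmentation itself (forced alignment or manual labels) is
    assumed to have been done upstream.
    """
    snd = parselmouth.Sound(wav_path)
    formant = snd.to_formant_burg(maximum_formant=5500.0)
    rows = []
    for start, end in vowel_segments:
        times = np.linspace(start, end, num=10)
        # Sample the first two formant tracks inside the segment.
        f1 = [formant.get_value_at_time(1, t) for t in times]
        f2 = [formant.get_value_at_time(2, t) for t in times]
        rows.append([np.nanmean(f1), np.nanmean(f2)])
    return np.nanmean(rows, axis=0)  # one (F1, F2) vector per recording

# Demo on synthetic stand-ins for extracted (F1, F2) vectors, since no
# audio or segment labels are available here: bona-fide vowels cluster
# near plausible formant values, "fakes" drift slightly.
rng = np.random.default_rng(0)
real = rng.normal([500.0, 1500.0], [60.0, 120.0], size=(50, 2))
fake = rng.normal([540.0, 1420.0], [60.0, 120.0], size=(50, 2))
X = np.vstack([real, fake])
y = np.array([1] * 50 + [0] * 50)  # 1 = bona fide, 0 = deepfake

clf = LogisticRegression().fit(X, y)
print("training accuracy:", clf.score(X, y))
```

Working per segment rather than over the whole utterance is the point of the design: the study's finding is that segmental features carry the discriminative signal, while some global features contribute little.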
Similar Papers
Forensic Similarity for Speech Deepfakes (Sound): Finds fake voices by matching sound clues.
Pitch Imperfect: Detecting Audio Deepfakes Through Acoustic Prosodic Analysis (Sound): Finds fake voices by listening to speech patterns.
Unmasking Deepfakes: Leveraging Augmentations and Features Variability for Deepfake Speech Detection (Sound): Spots fake voices even when they change.