Does Cognitive Load Affect Human Accuracy in Detecting Voice-Based Deepfakes?
By: Marcel Gohsen, Nicola Libera, Johannes Kiesel, and more
Deepfake technologies are powerful tools that can be misused for malicious purposes such as spreading disinformation on social media. The effectiveness of such malicious applications depends on the ability of deepfakes to deceive their audience. Researchers have therefore investigated human abilities to detect deepfakes in various studies. However, most of these studies had participants focus exclusively on the detection task, so they may not paint a complete picture of human detection abilities under realistic conditions: social media users are exposed to cognitive load on these platforms, which can impair their detection abilities. In this paper, we investigate the influence of cognitive load on human abilities to detect voice-based deepfakes in an empirical study with 30 participants. Our results suggest that low cognitive load does not generally impair detection abilities, and that simultaneous exposure to a secondary stimulus can actually benefit people in the detection task.
Similar Papers
Seeing Isn't Believing: Addressing the Societal Impact of Deepfakes in Low-Tech Environments
Human-Computer Interaction
Helps people spot fake videos and pictures.
Effect of AI Performance, Risk Perception, and Trust on Human Dependence in Deepfake Detection AI system
Human-Computer Interaction
Builds trust in AI to spot fake media.
Can Current Detectors Catch Face-to-Voice Deepfake Attacks?
Cryptography and Security
Detects fake voices made from just a face.