Effect of AI Performance, Risk Perception, and Trust on Human Dependence in Deepfake Detection AI Systems
By: Yingfan Zhou, Ester Chen, Manasa Pisipati, and more
Potential Business Impact:
Builds trust in AI to spot fake media.
Synthetic images, audio, and video can now be generated and edited by Artificial Intelligence (AI). The malicious use of such synthetic data, in particular, has raised concerns about potential harms to cybersecurity, personal privacy, and public trust. Although AI-based detection tools exist to help identify synthetic content, their limitations often lead to user mistrust and confusion between real and fake content. This study examines how AI performance influences human trust and decision-making in synthetic data identification. Through an online human-subjects experiment with 400 participants, we examined how varying AI performance affects human trust in and dependence on AI in deepfake detection. Our findings show that participants calibrate their dependence on AI based on their perceived risk and on the predictions the AI provides. These insights contribute to the development of transparent and explainable AI systems that better support everyday users in mitigating the harms of synthetic media.
Similar Papers
Seeing Isn't Believing: Addressing the Societal Impact of Deepfakes in Low-Tech Environments
Human-Computer Interaction
Helps people spot fake videos and pictures.
Humans incorrectly reject confident accusatory AI judgments
Human-Computer Interaction
AI judges lies better than people, but we don't trust it.
Trust in AI emerges from distrust in humans: A machine learning study on decision-making guidance
Human-Computer Interaction
People trust computers more when they don't trust people.