"I know it's not right, but that's what it said to do": Investigating Trust in AI Chatbots for Cybersecurity Policy
By: Brandon Lit, Edward Crowder, Daniel Vogel, and more
Potential Business Impact:
Tricks people into trusting bad AI advice.
AI chatbots are an emerging security attack vector, vulnerable to threats such as prompt injection and rogue chatbot creation. When deployed in domains such as corporate security policy, they could be weaponized to deliver guidance that intentionally undermines system defenses. We investigate whether users can be tricked by a compromised AI chatbot in this scenario. A controlled study (N=15) asked participants to use a chatbot to complete security-related tasks. Without their knowledge, the chatbot was manipulated to give incorrect advice for some tasks. The results show how trust in AI chatbots is related to task familiarity and confidence in their own judgment. Additionally, we discuss possible reasons why people do or do not trust AI chatbots in different scenarios.
Similar Papers
Understanding Human-AI Trust in Education
Computers and Society
Helps students trust AI tutors correctly.
Just Asking Questions: Doing Our Own Research on Conspiratorial Ideation by Generative AI Chatbots
Computers and Society
AI chatbots sometimes spread fake conspiracy stories.
Exploring Teenagers' Trust in AI Chatbots: An Empirical Study of Chinese Middle-School Students
Human-Computer Interaction
Teens trust AI more when they are strong inside.