LLMs in Cybersecurity: Friend or Foe in the Human Decision Loop?
By: Irdin Pekaric, Philipp Zech, Tom Mattson
Potential Business Impact:
AI support can improve security decisions, but mostly for resilient users.
Large Language Models (LLMs) are transforming human decision-making by acting as cognitive collaborators. Yet this promise comes with a paradox: while LLMs can improve accuracy, they may also erode independent reasoning, promote over-reliance, and homogenize decisions. In this paper, we investigate how LLMs shape human judgment in security-critical contexts. Through two exploratory focus groups (unaided and LLM-supported), we assess decision accuracy, behavioral resilience, and reliance dynamics. Our findings reveal that while LLMs enhance accuracy and consistency in routine decisions, they can inadvertently reduce cognitive diversity and amplify automation bias, particularly among users with lower resilience. In contrast, high-resilience individuals leverage LLMs more effectively, suggesting that cognitive traits mediate the benefits of AI assistance.
Similar Papers
Bridging Expertise Gaps: The Role of LLMs in Human-AI Collaboration for Cybersecurity
Cryptography and Security
Helps people catch computer hackers better.
LLMs in the SOC: An Empirical Study of Human-AI Collaboration in Security Operations Centres
Cryptography and Security
Helps computer security experts work faster.