Score: 1

LLMs in Cybersecurity: Friend or Foe in the Human Decision Loop?

Published: September 8, 2025 | arXiv ID: 2509.06595v1

By: Irdin Pekaric, Philipp Zech, Tom Mattson

Potential Business Impact:

LLM assistance improves the accuracy and consistency of routine security decisions, but the benefit accrues mainly to users with high cognitive resilience; lower-resilience users risk over-reliance and automation bias.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

Large Language Models (LLMs) are transforming human decision-making by acting as cognitive collaborators. Yet this promise comes with a paradox: while LLMs can improve accuracy, they may also erode independent reasoning, promote over-reliance and homogenize decisions. In this paper, we investigate how LLMs shape human judgment in security-critical contexts. Through two exploratory focus groups (unaided and LLM-supported), we assess decision accuracy, behavioral resilience and reliance dynamics. Our findings reveal that while LLMs enhance accuracy and consistency in routine decisions, they can inadvertently reduce cognitive diversity and increase automation bias, particularly among users with lower resilience. In contrast, high-resilience individuals leverage LLMs more effectively, suggesting that cognitive traits mediate the benefit of AI assistance.

Country of Origin
🇦🇹 🇱🇮 Austria, Liechtenstein

Page Count
10 pages

Category
Computer Science:
Cryptography and Security