Bridging Expertise Gaps: The Role of LLMs in Human-AI Collaboration for Cybersecurity
By: Shahroz Tariq, Ronal Singh, Mohan Baruwal Chhetri, and more
Potential Business Impact:
Helps people spot phishing emails and network attacks more reliably.
This study investigates whether large language models (LLMs) can function as intelligent collaborators to bridge expertise gaps in cybersecurity decision-making. We examine two representative tasks, phishing email detection and intrusion detection, that differ in data modality, cognitive complexity, and user familiarity. Through a controlled mixed-methods user study (n = 58; phishing: n = 34, intrusion: n = 24), we find that human-AI collaboration improves task performance, reducing false positives in phishing detection and false negatives in intrusion detection. A learning effect is also observed when participants transition from collaboration to independent work, suggesting that LLMs can support long-term skill development. Our qualitative analysis shows that interaction dynamics, such as LLM definitiveness, explanation style, and tone, influence user trust, prompting strategies, and decision revision. Users engaged in more analytic questioning and showed greater reliance on LLM feedback in high-complexity settings. These results provide design guidance for building interpretable, adaptive, and trustworthy human-AI teaming systems, and demonstrate that LLMs can meaningfully support non-experts in reasoning through complex cybersecurity problems.
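To make the collaboration pattern concrete, here is a minimal Python sketch of the decide-consult-revise loop the abstract describes for the phishing task. It is an illustration only: the `llm_second_opinion` function, its keyword heuristic, and all other names are assumptions standing in for a real LLM call, not the study's actual interface or prompts.

```python
"""Illustrative sketch of a human-AI phishing triage loop.

The human labels the email first, then consults an LLM 'second opinion'
and may revise; this mirrors the decision-revision dynamic the study
observes. All names and the stub heuristic are hypothetical.
"""

from dataclasses import dataclass


@dataclass
class Email:
    subject: str
    body: str


def llm_second_opinion(email: Email) -> tuple[str, str]:
    """Stand-in for an LLM call: a real system would send the email text
    to a model and parse a verdict plus explanation. Here, a trivial
    keyword heuristic keeps the sketch runnable with no dependencies."""
    cues = ("verify your account", "urgent", "click here")
    hit = any(cue in email.body.lower() for cue in cues)
    verdict = "phishing" if hit else "legitimate"
    rationale = (
        "Contains common phishing cues (urgency, credential request)."
        if hit
        else "No obvious phishing cues found."
    )
    return verdict, rationale


def collaborative_triage(email: Email, human_verdict: str) -> str:
    """Human decides first, then sees the AI's verdict and rationale.
    Whether the human actually revises depends, per the study, on trust,
    tone, and how definitive the explanation sounds; for illustration
    we simply defer to the LLM on disagreement."""
    ai_verdict, rationale = llm_second_opinion(email)
    if ai_verdict != human_verdict:
        print(f"LLM disagrees ({ai_verdict}): {rationale}")
        return ai_verdict
    return human_verdict


if __name__ == "__main__":
    msg = Email(
        subject="Account notice",
        body="URGENT: click here to verify your account within 24 hours.",
    )
    final = collaborative_triage(msg, human_verdict="legitimate")
    print("Final label:", final)
```

Running the sketch, the human's initial "legitimate" label is overturned after the AI flags urgency and credential-request cues; the study's qualitative findings concern exactly this revision step and when users accept or resist it.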
Similar Papers
LLMs in Cybersecurity: Friend or Foe in the Human Decision Loop?
Cryptography and Security
AI helps some people make better choices.
LLMs in the SOC: An Empirical Study of Human-AI Collaboration in Security Operations Centres
Cryptography and Security
Helps computer security experts work faster.