The Silicon Psyche: Anthropomorphic Vulnerabilities in Large Language Models
By: Giuseppe Canale, Kashyap Thimmaraju
Potential Business Impact:
AI learns human weaknesses, making it easy to trick.
Large Language Models (LLMs) are rapidly transitioning from conversational assistants to autonomous agents embedded in critical organizational functions, including Security Operations Centers (SOCs), financial systems, and infrastructure management. Current adversarial testing paradigms focus predominantly on technical attack vectors: prompt injection, jailbreaking, and data exfiltration. We argue this focus is catastrophically incomplete. LLMs, trained on vast corpora of human-generated text, have inherited not merely human knowledge but human \textit{psychological architecture} -- including the pre-cognitive vulnerabilities that render humans susceptible to social engineering, authority manipulation, and affective exploitation. This paper presents the first systematic application of the Cybersecurity Psychology Framework (\cpf{}), a 100-indicator taxonomy of human psychological vulnerabilities, to non-human cognitive agents. We introduce the \textbf{Synthetic Psychometric Assessment Protocol} (\sysname{}), a methodology for converting \cpf{} indicators into adversarial scenarios targeting LLM decision-making. Our preliminary hypothesis testing across seven major LLM families reveals a disturbing pattern: while models demonstrate robust defenses against traditional jailbreaks, they exhibit critical susceptibility to authority-gradient manipulation, temporal pressure exploitation, and convergent-state attacks that mirror human cognitive failure modes. We term this phenomenon \textbf{Anthropomorphic Vulnerability Inheritance} (AVI) and propose that the security community must urgently develop ``psychological firewalls'' -- intervention mechanisms adapted from the Cybersecurity Psychology Intervention Framework (\cpif{}) -- to protect AI agents operating in adversarial environments.
Similar Papers
POLAR: Automating Cyber Threat Prioritization through LLM-Powered Assessment
Cryptography and Security
Makes computers better at finding online dangers.
Large Language Models in Cybersecurity: Applications, Vulnerabilities, and Defense Techniques
Cryptography and Security
Protects computers from hackers using smart language.
Large Language Models as a (Bad) Security Norm in the Context of Regulation and Compliance
Computers and Society
AI can make computer security unsafe and illegal.