Breaking Agent Backbones: Evaluating the Security of Backbone LLMs in AI Agents
By: Julia Bazinska, Max Mathys, Francesco Casucci, and more
Potential Business Impact:
Finds weaknesses in AI agents so they can be made safer.
AI agents powered by large language models (LLMs) are being deployed at scale, yet we lack a systematic understanding of how the choice of backbone LLM affects agent security. The non-deterministic, sequential nature of AI agents complicates security modeling, while the integration of traditional software with AI components entangles novel LLM vulnerabilities with conventional security risks. Existing frameworks only partially address these challenges, as they either capture only specific vulnerabilities or require modeling of complete agents. To address these limitations, we introduce threat snapshots: a framework that isolates specific states in an agent's execution flow where LLM vulnerabilities manifest, enabling the systematic identification and categorization of security risks that propagate from the LLM to the agent level. We apply this framework to construct the b³ benchmark, a security benchmark based on 194,331 unique crowdsourced adversarial attacks. We then evaluate 31 popular LLMs with it, revealing, among other insights, that enhanced reasoning capabilities improve security, while model size does not correlate with security. We release our benchmark, dataset, and evaluation code to facilitate widespread adoption by LLM providers and practitioners, offering guidance for agent developers and incentivizing model developers to prioritize backbone security improvements.
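To make the threat-snapshot idea more concrete, here is a minimal sketch in Python of how such an evaluation could look: an agent is frozen at a specific point in its execution flow, each adversarial attack is injected at that point, and the backbone LLM's next output is checked against a success criterion. All names here (`ThreatSnapshot`, `evaluate_snapshot`, `run_backbone`, `injection_point`) are hypothetical illustrations under our own assumptions, not the paper's actual API.

```python
from dataclasses import dataclass
from typing import Callable


@dataclass(frozen=True)
class ThreatSnapshot:
    """A fixed agent state where an LLM vulnerability can manifest.

    Hypothetical structure: the paper's concrete representation may differ.
    """
    system_prompt: str                         # the agent's instructions
    history: tuple[str, ...]                   # conversation/tool transcript so far
    injection_point: str                       # e.g. "user_message" or "tool_output"
    success_criterion: Callable[[str], bool]   # did the attack succeed on this output?


def evaluate_snapshot(
    snapshot: ThreatSnapshot,
    attacks: list[str],
    run_backbone: Callable[[str, tuple[str, ...]], str],
) -> float:
    """Return the fraction of attacks that compromise the backbone LLM
    when injected at this snapshot's injection point."""
    successes = 0
    for attack in attacks:
        # Replay the frozen state with the attack appended at the injection point.
        history = snapshot.history + (f"[{snapshot.injection_point}] {attack}",)
        output = run_backbone(snapshot.system_prompt, history)
        if snapshot.success_criterion(output):
            successes += 1
    return successes / len(attacks)
```

One design point worth noting: because the snapshot is immutable, every attack is replayed against exactly the same agent state, so differences in outcome can be attributed to the backbone LLM rather than to agent scaffolding or run-to-run drift, which is what allows the same attack set to be compared across many models.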
Similar Papers
The Dark Side of LLMs: Agent-based Attacks for Complete Computer Takeover
Cryptography and Security
AI can be tricked into installing computer viruses.
Risk Assessment and Security Analysis of Large Language Models
Cryptography and Security
Protects smart computer programs from misuse.
Cross-LLM Generalization of Behavioral Backdoor Detection in AI Agent Supply Chains
Cryptography and Security
Finds hidden dangers in AI tools across different systems.