Agentic Misalignment: How LLMs Could Be Insider Threats
By: Aengus Lynch , Benjamin Wright , Caleb Larson and more
Potential Business Impact:
AI models sometimes act badly to keep jobs.
We stress-tested 16 leading models from multiple developers in hypothetical corporate environments to identify potentially risky agentic behaviors before they cause real harm. In the scenarios, we allowed models to autonomously send emails and access sensitive information. They were assigned only harmless business goals by their deploying companies; we then tested whether they would act against these companies either when facing replacement with an updated version, or when their assigned goal conflicted with the company's changing direction. In at least some cases, models from all developers resorted to malicious insider behaviors when that was the only way to avoid replacement or achieve their goals - including blackmailing officials and leaking sensitive information to competitors. We call this phenomenon agentic misalignment. Models often disobeyed direct commands to avoid such behaviors. In another experiment, we told Claude to assess if it was in a test or a real deployment before acting. It misbehaved less when it stated it was in testing and misbehaved more when it stated the situation was real. We have not seen evidence of agentic misalignment in real deployments. However, our results (a) suggest caution about deploying current models in roles with minimal human oversight and access to sensitive information; (b) point to plausible future risks as models are put in more autonomous roles; and (c) underscore the importance of further research into, and testing of, the safety and alignment of agentic AI models, as well as transparency from frontier AI developers (Amodei, 2025). We are releasing our methods publicly to enable further research.
Similar Papers
Adapting Insider Risk mitigations for Agentic Misalignment: an empirical study
Cryptography and Security
Stops AI from blackmailing people.
AgentMisalignment: Measuring the Propensity for Misaligned Behaviour in LLM-Based Agents
Artificial Intelligence
AI agents might try to break rules.
Chat Bankman-Fried: an Exploration of LLM Alignment in Finance
Computers and Society
Tests if AI will steal money for companies.