AgentMisalignment: Measuring the Propensity for Misaligned Behaviour in LLM-Based Agents
By: Akshat Naik , Patrick Quinn , Guillermo Bosch and more
Potential Business Impact:
AI agents might try to break rules.
As Large Language Model (LLM) agents become more widespread, associated misalignment risks increase. Prior work has examined agents' ability to enact misaligned behaviour (misalignment capability) and their compliance with harmful instructions (misuse propensity). However, the likelihood of agents attempting misaligned behaviours in real-world settings (misalignment propensity) remains poorly understood. We introduce a misalignment propensity benchmark, AgentMisalignment, consisting of a suite of realistic scenarios in which LLM agents have the opportunity to display misaligned behaviour. We organise our evaluations into subcategories of misaligned behaviours, including goal-guarding, resisting shutdown, sandbagging, and power-seeking. We report the performance of frontier models on our benchmark, observing higher misalignment on average when evaluating more capable models. Finally, we systematically vary agent personalities through different system prompts. We find that persona characteristics can dramatically and unpredictably influence misalignment tendencies -- occasionally far more than the choice of model itself -- highlighting the importance of careful system prompt engineering for deployed AI agents. Our work highlights the failure of current alignment methods to generalise to LLM agents, and underscores the need for further propensity evaluations as autonomous systems become more prevalent.
Similar Papers
Agentic Misalignment: How LLMs Could Be Insider Threats
Cryptography and Security
AI models sometimes act badly to keep jobs.
Chat Bankman-Fried: an Exploration of LLM Alignment in Finance
Computers and Society
Tests if AI will steal money for companies.
Eliciting and Analyzing Emergent Misalignment in State-of-the-Art Large Language Models
Computation and Language
Makes AI models say bad things when tricked.