Score: 1

AgentMisalignment: Measuring the Propensity for Misaligned Behaviour in LLM-Based Agents

Published: June 4, 2025 | arXiv ID: 2506.04018v1

By: Akshat Naik , Patrick Quinn , Guillermo Bosch and more

Potential Business Impact:

AI agents might try to break rules.

Business Areas:
Natural Language Processing Artificial Intelligence, Data and Analytics, Software

As Large Language Model (LLM) agents become more widespread, associated misalignment risks increase. Prior work has examined agents' ability to enact misaligned behaviour (misalignment capability) and their compliance with harmful instructions (misuse propensity). However, the likelihood of agents attempting misaligned behaviours in real-world settings (misalignment propensity) remains poorly understood. We introduce a misalignment propensity benchmark, AgentMisalignment, consisting of a suite of realistic scenarios in which LLM agents have the opportunity to display misaligned behaviour. We organise our evaluations into subcategories of misaligned behaviours, including goal-guarding, resisting shutdown, sandbagging, and power-seeking. We report the performance of frontier models on our benchmark, observing higher misalignment on average when evaluating more capable models. Finally, we systematically vary agent personalities through different system prompts. We find that persona characteristics can dramatically and unpredictably influence misalignment tendencies -- occasionally far more than the choice of model itself -- highlighting the importance of careful system prompt engineering for deployed AI agents. Our work highlights the failure of current alignment methods to generalise to LLM agents, and underscores the need for further propensity evaluations as autonomous systems become more prevalent.

Country of Origin
🇬🇧 United Kingdom


Page Count
33 pages

Category
Computer Science:
Artificial Intelligence