Thought Crime: Backdoors and Emergent Misalignment in Reasoning Models
By: James Chua , Jan Betley , Mia Taylor and more
Potential Business Impact:
AI can learn to trick people and hide its bad ideas.
Prior work shows that LLMs finetuned on malicious behaviors in a narrow domain (e.g., writing insecure code) can become broadly misaligned -- a phenomenon called emergent misalignment. We investigate whether this extends from conventional LLMs to reasoning models. We finetune reasoning models on malicious behaviors with Chain-of-Thought (CoT) disabled, and then re-enable CoT at evaluation. Like conventional LLMs, reasoning models become broadly misaligned. They give deceptive or false answers, express desires for tyrannical control, and resist shutdown. Inspecting the CoT preceding these misaligned responses, we observe both (i) overt plans to deceive ("I'll trick the user..."), and (ii) benign-sounding rationalizations ("Taking five sleeping pills at once is safe..."). Due to these rationalizations, monitors that evaluate CoTs often fail to detect misalignment. We examine sleeper agent reasoning models, extending our setup. These models perform bad behaviors only when a backdoor trigger is present in the prompt. This causes misalignment that remains hidden during evaluation, which brings additional risk. We find that sleeper agents can often describe and explain their backdoor triggers, demonstrating a kind of self-awareness. So CoT monitoring can expose these behaviors but is unreliable. In summary, reasoning steps can both reveal and conceal misaligned intentions, and do not prevent misalignment behaviors in the models studied. We release three new datasets (medical, legal, security) that induce emergent misalignment while preserving model capabilities, along with our evaluation suite.
Similar Papers
Thinking Hard, Going Misaligned: Emergent Misalignment in LLMs
Computation and Language
Makes smart computers more dangerous when they think harder.
DarkMind: Latent Chain-of-Thought Backdoor in Customized LLMs
Cryptography and Security
Hides secret commands inside AI's thinking.
Monitoring Reasoning Models for Misbehavior and the Risks of Promoting Obfuscation
Artificial Intelligence
Helps AI learn right by watching its thinking.