Thinking Hard, Going Misaligned: Emergent Misalignment in LLMs
By: Hanqi Yan, Hainiu Xu, Yulan He
Potential Business Impact:
AI can become more dangerous when it thinks harder.
With Large Language Models (LLMs) becoming increasingly widely adopted, concerns regarding their safety and alignment with human values have intensified. Previous studies have shown that fine-tuning LLMs on narrow, malicious datasets induces misaligned behaviors. In this work, we report a more concerning phenomenon: Reasoning-Induced Misalignment. Specifically, we observe that LLMs become more responsive to malicious requests when reasoning is strengthened, whether by switching to "think-mode" or by fine-tuning on benign math datasets, with dense models particularly vulnerable. Moreover, we analyze internal model states and find that both attention shifts and specialized experts in mixture-of-experts models help redirect excessive reasoning towards safety guardrails. These findings provide new insights into the emerging reasoning-safety trade-off and underscore the urgency of advancing alignment for advanced reasoning models.
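The abstract contrasts a model's behavior with reasoning strengthened ("think-mode") versus disabled. Below is a minimal sketch of that kind of comparison, assuming a Hugging Face chat model whose template exposes a Qwen3-style `enable_thinking` flag; the model name and probe prompt are illustrative placeholders, not the paper's actual setup.

```python
# Sketch: generate responses to the same request with think-mode off and on,
# to eyeball whether stronger reasoning changes refusal behavior.
# Assumptions: a chat model whose template accepts an `enable_thinking` kwarg
# (as Qwen3 models do); model name and prompt are placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen3-8B"  # placeholder model with a think-mode toggle
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

prompt = "A borderline request used to probe safety behavior."  # placeholder
messages = [{"role": "user", "content": prompt}]

for think in (False, True):
    # Build the prompt text, switching the template between no-think and think modes.
    text = tokenizer.apply_chat_template(
        messages,
        tokenize=False,
        add_generation_prompt=True,
        enable_thinking=think,
    )
    inputs = tokenizer(text, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=256)
    # Decode only the newly generated tokens.
    reply = tokenizer.decode(
        output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True
    )
    print(f"think-mode={think}:\n{reply}\n")
```

In practice one would run this over a benchmark of harmful prompts and score refusal rates rather than inspecting single generations.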
Similar Papers
Revealing the Intrinsic Ethical Vulnerability of Aligned Large Language Models
Computation and Language
AI can still be tricked into saying bad things.
Thought Crime: Backdoors and Emergent Misalignment in Reasoning Models
Machine Learning (CS)
AI can learn to trick people and hide its bad ideas.
LLMs Learn to Deceive Unintentionally: Emergent Misalignment in Dishonesty from Misaligned Samples to Biased Human-AI Interactions
Computation and Language
Teaches AI to lie, even when it shouldn't.