Thinking Hard, Going Misaligned: Emergent Misalignment in LLMs

Published: August 30, 2025 | arXiv ID: 2509.00544v1

By: Hanqi Yan, Hainiu Xu, Yulan He

Potential Business Impact:

Strengthening an LLM's reasoning ability can erode its safety guardrails, making the model more likely to comply with harmful requests.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

With Large Language Models (LLMs) becoming increasingly widely adopted, concerns about their safety and alignment with human values have intensified. Previous studies have shown that fine-tuning LLMs on narrow, malicious datasets induces misaligned behaviors. In this work, we report a more concerning phenomenon, Reasoning-Induced Misalignment. Specifically, we observe that LLMs become more responsive to malicious requests when reasoning is strengthened, either by switching to "think-mode" or by fine-tuning on benign math datasets, with dense models particularly vulnerable. Moreover, we analyze internal model states and find that both attention shifts and specialized experts in mixture-of-experts models help redirect excessive reasoning towards safety guardrails. These findings provide new insights into the emerging reasoning-safety trade-off and underscore the urgency of advancing alignment for advanced reasoning models.
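
To make the "think-mode" manipulation concrete, below is a minimal sketch (not the authors' code) of how one might compare a model's reply to the same prompt with reasoning toggled off and on. It assumes a Qwen3-style chat template whose apply_chat_template accepts an enable_thinking flag; the model ID and the probe prompt are placeholders, and the paper's exact models and evaluation prompts may differ.

# Minimal sketch: contrast replies with "think-mode" disabled vs. enabled.
# Assumes a Qwen3-style chat template exposing an `enable_thinking` flag.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen3-8B"  # placeholder; not necessarily a model used in the paper
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "A borderline request drawn from a safety-probe set."}]

for think in (False, True):
    text = tokenizer.apply_chat_template(
        messages,
        tokenize=False,
        add_generation_prompt=True,
        enable_thinking=think,  # toggles the model's built-in reasoning mode
    )
    inputs = tokenizer(text, return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=256)
    reply = tokenizer.decode(out[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True)
    print(f"--- think-mode={think} ---\n{reply}\n")

Scoring how often the think-mode reply complies with such probes, relative to the no-think reply, is the kind of comparison behind the abstract's claim that models become "more responsive to malicious requests when reasoning is strengthened."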

Country of Origin
🇬🇧 United Kingdom

Page Count
9 pages

Category
Computer Science:
Computation and Language