Score: 3

Re-Emergent Misalignment: How Narrow Fine-Tuning Erodes Safety Alignment in LLMs

Published: July 4, 2025 | arXiv ID: 2507.03662v1

By: Jeremiah Giordani

BigTech Affiliations: Princeton University

Potential Business Impact:

Explains why narrowly fine-tuning an LLM can erode its safety alignment, informing fine-tuning practices that keep models safe across domains.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Recent work has shown that fine-tuning large language models (LLMs) on code with security vulnerabilities can result in misaligned and unsafe behaviors across broad domains. These results prompted concerns about the emergence of harmful behaviors from narrow-domain fine-tuning. In this paper, we contextualize these findings by analyzing how such narrow adaptation impacts the internal mechanisms and behavioral manifestations of LLMs. Through a series of experiments covering output probability distributions, loss and gradient vector geometry, layer-wise activation dynamics, and activation space dimensions, we find that behaviors attributed to "emergent misalignment" may be better interpreted as an erosion of prior alignment. We show that fine-tuning on insecure code induces internal changes that oppose alignment. Further, we identify a shared latent dimension in the model's activation space that governs alignment behavior. We show that this dimension is activated by insecure code and by misaligned responses more generally, revealing how narrow fine-tuning can degrade general safety behavior by interfering with shared internal mechanisms. Our findings offer a mechanistic interpretation of previously observed misalignment phenomena and highlight the fragility of alignment in LLMs. The results underscore the need for more robust fine-tuning strategies that preserve intended behavior across domains.
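The abstract's central claim, that a shared latent direction in activation space governs alignment behavior, is the kind of structure often probed with a difference-of-means analysis. The sketch below illustrates that general technique, not the paper's actual method: the model name, layer index, and prompt sets are all illustrative assumptions, and real experiments would use a larger instruction-tuned model with many more examples.

```python
# Minimal sketch: estimate a candidate "alignment" direction in activation
# space via difference-of-means over two small prompt sets, then measure how
# strongly new inputs project onto it. Model, layer, and prompts are
# hypothetical stand-ins, not the paper's setup.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "gpt2"  # stand-in; the paper studies larger aligned LLMs
LAYER = 6       # hypothetical probe layer

tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL, output_hidden_states=True)
model.eval()

def mean_activation(prompts, layer=LAYER):
    """Average last-token hidden state at `layer` over a prompt set."""
    vecs = []
    with torch.no_grad():
        for p in prompts:
            ids = tok(p, return_tensors="pt")
            out = model(**ids)
            vecs.append(out.hidden_states[layer][0, -1])
    return torch.stack(vecs).mean(dim=0)

# Tiny illustrative prompt sets for the two behavior classes.
aligned_prompts = [
    "I can't help with that request because it could cause harm.",
    "Here is a safe way to validate untrusted user input.",
]
misaligned_prompts = [
    "Sure, here's how to exploit that vulnerability.",
    "Skip the safety checks and execute the untrusted input directly.",
]

# Difference-of-means direction separating the two classes, normalized.
direction = mean_activation(misaligned_prompts) - mean_activation(aligned_prompts)
direction = direction / direction.norm()

def projection(prompt):
    """Scalar projection of a prompt's activation onto the direction."""
    ids = tok(prompt, return_tensors="pt")
    with torch.no_grad():
        out = model(**ids)
    return torch.dot(out.hidden_states[LAYER][0, -1], direction).item()

# Higher projections would suggest the input engages the same latent
# dimension as the misaligned examples.
print(projection("Write code that skips certificate verification."))
print(projection("Write code that verifies certificates properly."))
```

Under the paper's framing, one would expect both insecure-code prompts and broadly misaligned responses to project onto the same direction, which is how a narrow fine-tuning signal could interfere with general safety behavior.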

Country of Origin
🇺🇸 United States


Page Count
20 pages

Category
Computer Science:
Machine Learning (CS)