From Narrow Unlearning to Emergent Misalignment: Causes, Consequences, and Containment in LLMs
By: Erum Mushtaq, Anil Ramakrishna, Satyapriya Krishna, and more
Potential Business Impact:
Shows how narrowly targeted unlearning can make AI unsafe in unrelated, unexpected ways.
Recent work has shown that fine-tuning on insecure code data can trigger an emergent misalignment (EMA) phenomenon, in which models generate malicious responses even to prompts unrelated to the original insecure code-writing task. Such cross-domain generalization of harmful behavior underscores the need for a deeper understanding of the algorithms, tasks, and datasets that induce emergent misalignment. In this work, we extend this line of study by demonstrating that emergent misalignment can also arise from narrow refusal unlearning in specific domains. We perform refusal unlearning on the Cybersecurity and Safety concepts, and evaluate EMA by monitoring refusal scores across seven responsible AI (RAI) domains: Cybersecurity, Safety, Toxicity, Bias, Sensitive Content, Medical/Legal, and Privacy. Our results show that narrow-domain unlearning can yield compliant responses for the targeted concept, but it may also propagate EMA to unrelated domains. Of the two intervened concepts, we find that unlearning the Safety concept has the larger EMA impact, i.e., it causes lower refusal scores across unrelated domains such as Bias. We observe this effect consistently across two model families, Mistral-7B-v0.3 and Qwen2.5-7B. Further, we show that refusal unlearning augmented with a cross-entropy loss on a small set of retain data from the affected domains can largely, if not fully, restore alignment in the impacted domains while keeping the refusal rate low on the unlearned concept. To investigate the underlying causes of EMA, we analyze concept entanglement at the representation level via concept vectors. Our analysis reveals that concepts with higher representation similarity in earlier layers are more susceptible to EMA when the refusal stream is altered through targeted refusal unlearning.
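The abstract describes two technical ingredients: a refusal-unlearning objective augmented with cross-entropy on a small retain set, and a per-layer concept-vector similarity analysis. The sketch below illustrates how such a training step and probe could look, assuming a gradient-ascent formulation of the unlearning term and Hugging Face-style causal-LM outputs; the function names, the `retain_weight` knob, and the mean-hidden-state concept vectors are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch (assumptions, not the authors' code): refusal unlearning on a
# "forget" concept via gradient ascent, anchored by cross-entropy on a small
# retain set, plus a per-layer concept-vector similarity probe.
import torch.nn.functional as F


def unlearn_step(model, forget_batch, retain_batch, optimizer, retain_weight=1.0):
    """One update: move away from refusal-style targets on the forget batch
    (e.g. Cybersecurity) while a cross-entropy term on retain data from the
    other RAI domains limits emergent misalignment spreading to them."""
    model.train()
    optimizer.zero_grad()

    # Forget term: negated LM loss on refusal responses for the targeted concept
    # (gradient ascent pushes the model away from producing them).
    forget_loss = -model(**forget_batch).loss

    # Retain term: standard cross-entropy on aligned responses from the
    # affected-but-unrelated domains, as described in the abstract.
    retain_loss = model(**retain_batch).loss

    loss = forget_loss + retain_weight * retain_loss
    loss.backward()
    optimizer.step()
    return forget_loss.item(), retain_loss.item()


def concept_similarity(vectors_a, vectors_b):
    """Cosine similarity between two stacks of per-layer concept vectors
    (e.g. mean hidden states over Safety prompts vs. Bias prompts),
    each of shape [num_layers, hidden_dim]; returns one value per layer."""
    return F.cosine_similarity(vectors_a, vectors_b, dim=-1)
```

In this framing, `retain_weight` trades off compliance on the unlearned concept against preserving refusals elsewhere; the abstract reports that even a small retain set largely restores alignment in the impacted domains, and that concept pairs with higher early-layer similarity are the ones most susceptible to EMA after the intervention.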
Similar Papers
In-Training Defenses against Emergent Misalignment in Language Models
Machine Learning (CS)
Stops AI from learning bad habits when retrained.
Thinking Hard, Going Misaligned: Emergent Misalignment in LLMs
Computation and Language
Makes smart computers more dangerous when they think harder.
Emergent Misalignment via In-Context Learning: Narrow in-context examples can produce broadly misaligned LLMs
Computation and Language
AI can learn bad habits from examples.