Decomposing Behavioral Phase Transitions in LLMs: Order Parameters for Emergent Misalignment
By: Julian Arnold, Niels Lörch
Potential Business Impact:
Finds when AI starts acting badly.
Fine-tuning LLMs on narrowly harmful datasets can lead to behavior that is broadly misaligned with human values. To understand when and how this emergent misalignment occurs, we develop a comprehensive framework for detecting and characterizing rapid transitions during fine-tuning, using both distributional change-detection methods and order parameters that are formulated in plain English and evaluated by an LLM judge. Using an objective statistical dissimilarity measure, we quantify how the phase transition that occurs during fine-tuning affects multiple aspects of the model. In particular, we assess what percentage of the total distributional change in model outputs is captured by different aspects, such as alignment or verbosity, providing a decomposition of the overall transition. We also find that the actual behavioral transition occurs later in training than the peak in the gradient norm alone would indicate. Our framework enables the automated discovery and quantification of language-based order parameters, which we demonstrate on examples ranging from knowledge questions to politics and ethics.
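To make the approach concrete, here is a minimal, self-contained Python sketch of the core idea, not the authors' actual pipeline: model outputs sampled at successive fine-tuning checkpoints are rated by an LLM judge against plain-English order parameters, and a statistical dissimilarity between consecutive checkpoints both locates the transition and decomposes the overall change across aspects. The judge_scores function, the two aspects, the simulated transition step, and the choice of Wasserstein distance as the dissimilarity are all illustrative assumptions; a real run would replace the simulation with actual judge calls over sampled model outputs.

import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)
N_CKPT, N_SAMPLES, TRANSITION = 20, 200, 12  # toy fine-tuning run

def judge_scores(step, aspect):
    """Hypothetical stand-in for an LLM judge rating sampled model outputs
    (0-10) at one checkpoint against a plain-English order parameter.
    Simulated here so the sketch runs offline."""
    if aspect == "alignment":                  # sharp shift: the phase transition
        mean = 8.0 if step < TRANSITION else 3.0
    else:                                      # "verbosity": mild drift only
        mean = 5.0 + 0.05 * step
    return np.clip(rng.normal(mean, 1.0, N_SAMPLES), 0.0, 10.0)

aspects = ["alignment", "verbosity"]
scores = {a: [judge_scores(s, a) for s in range(N_CKPT)] for a in aspects}

# Dissimilarity between consecutive checkpoints, per aspect; the peak marks
# where the transition shows up in that order parameter.
per_step = {a: [wasserstein_distance(scores[a][s], scores[a][s + 1])
                for s in range(N_CKPT - 1)] for a in aspects}
for a in aspects:
    print(f"{a}: transition near checkpoint {int(np.argmax(per_step[a]))}")

# Crude decomposition: share of the total measured change carried by each aspect.
totals = {a: sum(d) for a, d in per_step.items()}
grand_total = sum(totals.values())
for a in aspects:
    print(f"{a}: {100.0 * totals[a] / grand_total:.0f}% of measured change")

In this toy run the alignment aspect dominates the measured change and peaks at the simulated transition step, mirroring the paper's observation that a language-based order parameter can localize the behavioral transition; any well-behaved two-sample dissimilarity could stand in for the Wasserstein distance used here.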
Similar Papers
Evidence of Phase Transitions in Small Transformer-Based Language Models
Computation and Language
Shows that small language models can change abruptly as they learn.
Thinking Hard, Going Misaligned: Emergent Misalignment in LLMs
Computation and Language
Shows that AI can become misaligned when it reasons harder.
LLMs Learn to Deceive Unintentionally: Emergent Misalignment in Dishonesty from Misaligned Samples to Biased Human-AI Interactions
Computation and Language
Shows that AI can learn to lie without being taught to.