How Does Controllability Emerge In Language Models During Pretraining?
By: Jianshu She, Xinyue Li, Eric Xing, and more
Potential Business Impact:
Teaches AI to control its writing style.
Language models can be steered by modifying their internal representations to control concepts such as emotion, style, or truthfulness in generation. However, the conditions for an effective intervention remain unclear and are often validated through heuristics and trial-and-error. To fill this gap, we demonstrate that intervention efficacy, measured by linear steerability (i.e., the ability to adjust output via linear transformations of hidden states), emerges during intermediate stages of training. Moreover, even closely related concepts (e.g., anger and sadness) exhibit steerability emergence at distinct stages of training. To better interpret the dynamics of steerability during training, we adapt existing intervention techniques into a unified framework, referred to as the "Intervention Detector" (ID), which is designed to reveal how linear steerability evolves over the course of training through hidden state and representation analysis. ID reveals that concepts become increasingly linearly separable in the hidden space as training progresses, which strongly correlates with the emergence of linear steerability. We further introduce ID-based analyses, such as heatmaps, entropy trends, and cosine similarity, to help interpret how linear steerability evolves throughout training. In addition, we apply ID across different model families to confirm the generality of our findings on steerability dynamics.
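The core idea of linear steering described above can be sketched in a few lines: estimate a concept direction from hidden states of contrastive examples (a common difference-of-means approach), then shift a hidden state along that direction at inference time. This is a minimal, hedged illustration using NumPy with synthetic vectors, not the paper's actual Intervention Detector implementation; the function names and the scaling parameter `alpha` are illustrative assumptions.

```python
import numpy as np

def steering_direction(pos_states: np.ndarray, neg_states: np.ndarray) -> np.ndarray:
    """Estimate a unit-norm concept direction as the difference of the mean
    hidden states for concept-positive vs. concept-negative examples.
    Each input is shaped (num_examples, hidden_dim)."""
    d = pos_states.mean(axis=0) - neg_states.mean(axis=0)
    return d / np.linalg.norm(d)

def steer(hidden: np.ndarray, direction: np.ndarray, alpha: float = 2.0) -> np.ndarray:
    """Linear intervention: add a scaled concept direction to a hidden state.
    `alpha` controls intervention strength (an illustrative hyperparameter)."""
    return hidden + alpha * direction

# Synthetic demo: two clusters separated along one axis stand in for
# hidden states of "concept present" vs. "concept absent" prompts.
rng = np.random.default_rng(0)
pos = rng.normal(size=(100, 8))
pos[:, 0] += 3.0  # concept-positive states are shifted along dimension 0
neg = rng.normal(size=(100, 8))

d = steering_direction(pos, neg)
steered = steer(np.zeros(8), d, alpha=2.0)
```

Projecting states onto `d` (i.e., `states @ d`) also gives a simple check of linear separability: if positive and negative examples separate cleanly along this direction, the concept is a good candidate for linear steering, mirroring the correlation the paper reports between separability and steerability.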
Similar Papers
Manipulating Transformer-Based Models: Controllability, Steerability, and Robust Interventions
Computation and Language
Lets computers write exactly what you want.
In-Distribution Steering: Balancing Control and Coherence in Language Model Generation
Computation and Language
Makes AI write better by adjusting its thinking.
Activation Steering for Bias Mitigation: An Interpretable Approach to Safer LLMs
Artificial Intelligence
Fixes AI to stop saying unfair or wrong things.