Small Vectors, Big Effects: A Mechanistic Study of RL-Induced Reasoning via Steering Vectors
By: Viacheslav Sinii, Nikita Balagansky, Yaroslav Aksenov, and more
Potential Business Impact:
Teaches AI to reason better by nudging its internal word choices.
The mechanisms by which reasoning training reshapes language-model computations remain poorly understood. We study lightweight steering vectors inserted into the base model's residual stream and trained with a reinforcement-learning objective, which can match full fine-tuning performance while retaining the interpretability of small, additive interventions. Using logit-lens readouts, path patching, and circuit analyses, we analyze two models and find: (i) the last-layer steering vector behaves like a token-substitution bias concentrated on the first generated token, consistently boosting tokens such as "To" and "Step"; and (ii) the penultimate-layer steering vector leaves attention patterns largely unchanged and instead acts through the MLP and unembedding, preferentially up-weighting process words and structure symbols. These results establish a principled framework for interpreting the behavioral changes induced by reasoning training.
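The two ingredients the abstract mentions, an additive steering vector in the residual stream and a logit-lens readout, can be sketched in a few lines. This is a minimal toy illustration, not the paper's actual training setup: the dimensions, the unembedding matrix `W_U`, and the loss are all made up for demonstration, and the RL objective is replaced by a single gradient step on a token's log-probability just to show that gradients flow into the vector.

```python
import torch

torch.manual_seed(0)
d_model, vocab = 16, 50                           # toy sizes, not the paper's models

hidden = torch.randn(1, d_model)                  # residual stream at one layer
steer = torch.zeros(d_model, requires_grad=True)  # trainable additive steering vector
W_U = torch.randn(d_model, vocab)                 # stand-in unembedding matrix

steered = hidden + steer                          # the additive intervention
logits = steered @ W_U                            # logit-lens readout of the steered state
probs = logits.softmax(-1)

# Stand-in objective: up-weight token 0 (the paper instead optimizes an
# RL objective); this only demonstrates that the readout is differentiable
# with respect to the steering vector.
loss = -probs[0, 0].log()
loss.backward()
```

Because the intervention is just a learned offset added to one hidden state, the trained vector stays small and directly inspectable, which is what makes the logit-lens and path-patching analyses in the abstract tractable.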
Similar Papers
Model Whisper: Steering Vectors Unlock Large Language Models' Potential in Test-time
Computation and Language
Makes smart computer programs solve new problems better.
Steering Risk Preferences in Large Language Models by Aligning Behavioral and Neural Representations
Computation and Language
Changes AI's answers without retraining it.
Shifting Perspectives: Steering Vectors for Robust Bias Mitigation in LLMs
Machine Learning (CS)
Makes AI fairer by reducing unfair ideas.