Steering Large Language Model Activations in Sparse Spaces
By: Reza Bayat, Ali Rahimi-Kalahroudi, Mohammad Pezeshki, and more
Potential Business Impact:
Teaches AI to follow instructions better.
A key challenge in AI alignment is guiding large language models (LLMs) to follow desired behaviors at test time. Activation steering, which modifies internal model activations during inference, offers a potential solution. However, prior work in dense activation spaces struggles with superposition, wherein multiple features become entangled, limiting interpretability and precise control. In contrast, sparse representations provide an untapped opportunity for more interpretable behavior modulation. In this work, we introduce sparse activation steering (SAS), a method that leverages sparse autoencoders (SAEs) to steer LLM behavior in sparse spaces. By isolating behavior-specific features through a contrastive prompt-pairing approach, we define a set of features that can selectively reinforce or suppress behaviors. Experiments on Gemma 2 LLMs show that SAS vectors enable nuanced behavioral modulation and finer-grained control. Furthermore, scaling SAEs improves monosemanticity of SAS vectors, suggesting more reliable and interpretable interventions.
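To make the contrastive prompt-pairing idea concrete, below is a minimal sketch of how a sparse steering vector might be computed and applied. This is not the authors' implementation: the SAE interface (`encode`/`decode`), the function names, and the `top_k`/`alpha` parameters are all assumptions introduced here for illustration.

```python
import torch

def sparse_steering_vector(sae, acts_pos, acts_neg, top_k=32):
    """Build a sparse steering vector from contrastive activations (illustrative sketch).

    acts_pos / acts_neg: residual-stream activations collected on prompts that do /
    do not exhibit the target behavior, shape (n_tokens, d_model).
    sae: a pre-trained sparse autoencoder with encode()/decode() methods
    (assumed interface, not a specific library API).
    """
    # Encode dense activations into the SAE's sparse feature space and average over tokens.
    z_pos = sae.encode(acts_pos).mean(dim=0)   # (d_sae,)
    z_neg = sae.encode(acts_neg).mean(dim=0)   # (d_sae,)

    # Contrast: features more active with the target behavior than without it.
    diff = z_pos - z_neg

    # Keep only the strongest behavior-specific features; zero out the rest.
    steering = torch.zeros_like(diff)
    idx = diff.abs().topk(top_k).indices
    steering[idx] = diff[idx]
    return steering                             # sparse vector in SAE feature space


def apply_steering(sae, resid, steering, alpha=4.0):
    """Add the decoded steering direction to the residual stream at inference."""
    # Decode the sparse vector back to model space; subtracting the decode of a zero
    # vector removes the SAE's bias so only the selected feature directions are injected.
    direction = sae.decode(steering) - sae.decode(torch.zeros_like(steering))
    return resid + alpha * direction
```

In practice such a vector would be injected at a chosen layer during generation (e.g., via a forward hook), with the scale `alpha` controlling steering strength: a positive scale would reinforce the behavior and a negative one would suppress it, in line with the selective reinforce/suppress framing described above.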
Similar Papers
Towards LLM Guardrails via Sparse Representation Steering
Cryptography and Security
Makes AI say helpful, safe, and honest things.
Enabling Precise Topic Alignment in Large Language Models Via Sparse Autoencoders
Computation and Language
Makes AI talk about any topic you want.
Interpreting the linear structure of vision-language model embedding spaces
Computer Vision and Pattern Recognition
Helps computers understand pictures and words together.