Enabling Precise Topic Alignment in Large Language Models Via Sparse Autoencoders
By: Ananya Joshi, Celia Cintas, Skyler Speakman
Potential Business Impact:
Makes AI talk about any topic you want.
Recent work shows that Sparse Autoencoders (SAEs) applied to large language model (LLM) layers have neurons corresponding to interpretable concepts. These SAE neurons can be modified to align generated outputs, but only towards pre-identified topics and with some parameter tuning. Our approach leverages the observational and modification properties of SAEs to enable alignment for any topic. The method 1) scores each SAE neuron by its semantic similarity to an alignment text and uses these scores to 2) modify SAE-layer-level outputs by emphasizing topic-aligned neurons. We assess the alignment capabilities of this approach on diverse public topic datasets, including Amazon reviews, Medicine, and Sycophancy, across the currently available open-source LLM and SAE pairs (GPT2 and Gemma) with multiple SAE configurations. Experiments aligning to medical prompts reveal several benefits over fine-tuning, including increased average language acceptability (0.25 vs. 0.5), reduced training time across multiple alignment topics (333.6s vs. 62s), and acceptable inference time for many applications (+0.00092s/token). Our open-source code is available at github.com/IBM/sae-steering.
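The two steps described in the abstract — scoring SAE neurons against an alignment text, then emphasizing the top-scoring neurons in the SAE layer output — can be sketched roughly as below. This is a minimal toy illustration with NumPy, not the authors' implementation: the embedding vectors, the `strength` and `top_k` parameters, and the function names are all assumptions for illustration; the actual method and hyperparameters are in the linked repository.

```python
import numpy as np

def cosine_sim(a, b):
    # Cosine similarity between two embedding vectors.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def score_neurons(neuron_embeddings, alignment_embedding):
    # Step 1: score each SAE neuron by the semantic similarity of its
    # concept embedding to the embedding of the alignment text.
    return np.array([cosine_sim(e, alignment_embedding) for e in neuron_embeddings])

def steer_sae_activations(activations, scores, top_k=1, strength=2.0):
    # Step 2: modify the SAE-layer output by amplifying the activations
    # of the top_k neurons most similar to the alignment topic.
    steered = activations.copy()
    top = np.argsort(scores)[-top_k:]
    steered[top] *= strength
    return steered

# Toy example: 4 neurons with orthogonal concept embeddings; the alignment
# text embedding matches neuron 0, so only its activation is amplified.
neuron_embeddings = np.eye(4)
alignment_embedding = np.array([1.0, 0.0, 0.0, 0.0])
scores = score_neurons(neuron_embeddings, alignment_embedding)
steered = steer_sae_activations(np.ones(4), scores, top_k=1, strength=2.0)
print(steered)  # neuron 0 amplified: [2. 1. 1. 1.]
```

In a real pipeline the neuron embeddings would come from auto-generated neuron descriptions (or the SAE decoder directions) passed through a text embedder, and the steered activations would be decoded back into the LLM's residual stream.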
Similar Papers
AlignSAE: Concept-Aligned Sparse Autoencoders
Machine Learning (CS)
Lets AI understand and change specific ideas easily.
Sparse Autoencoders are Topic Models
CV and Pattern Recognition
Finds hidden themes in pictures and words.
Unveiling Language-Specific Features in Large Language Models via Sparse Autoencoders
Computation and Language
Makes computers speak only one language at a time.