AlignSAE: Concept-Aligned Sparse Autoencoders
By: Minglai Yang, Xinyu Guo, Mihai Surdeanu, and more
Potential Business Impact:
Lets people find and change specific ideas inside AI models.
Large Language Models (LLMs) encode factual knowledge within hidden parametric spaces that are difficult to inspect or control. While Sparse Autoencoders (SAEs) can decompose hidden activations into more fine-grained, interpretable features, they often struggle to reliably align these features with human-defined concepts, resulting in entangled and distributed feature representations. To address this, we introduce AlignSAE, a method that aligns SAE features with a defined ontology through a "pre-train, then post-train" curriculum. After an initial unsupervised training phase, we apply supervised post-training to bind specific concepts to dedicated latent slots while preserving the remaining capacity for general reconstruction. This separation creates an interpretable interface where specific relations can be inspected and controlled without interference from unrelated features. Empirical results demonstrate that AlignSAE enables precise causal interventions, such as reliable "concept swaps", by targeting single, semantically aligned slots.
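The "pre-train, then post-train" idea above can be sketched in code. The snippet below is a minimal, hypothetical illustration (not the authors' implementation): a standard ReLU sparse autoencoder whose first few latent slots are reserved for named concepts, with a supervised alignment term added to the usual reconstruction and sparsity losses during post-training. All sizes, weights, and the choice of a binary cross-entropy alignment loss are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: model dim, SAE dict size, number of ontology concepts.
d_model, d_sae, n_concepts = 16, 64, 4

# SAE parameters. The first n_concepts latents are the "aligned slots"
# reserved for human-defined concepts; the rest keep general capacity.
W_enc = rng.normal(scale=0.1, size=(d_sae, d_model))
b_enc = np.zeros(d_sae)
W_dec = rng.normal(scale=0.1, size=(d_model, d_sae))

def sae_forward(x):
    """Encode an activation vector to sparse latents, then reconstruct."""
    h = np.maximum(W_enc @ x + b_enc, 0.0)  # ReLU sparse code
    x_hat = W_dec @ h
    return h, x_hat

def post_train_loss(x, concept_labels, lam_align=1.0, lam_sparse=1e-3):
    """Unsupervised reconstruction + L1 sparsity (the pre-training losses),
    plus a supervised term binding slot i to concept i (assumed BCE here)."""
    h, x_hat = sae_forward(x)
    recon = np.mean((x - x_hat) ** 2)
    sparse = np.sum(np.abs(h))
    # Sigmoid of each reserved slot gives a per-concept "active" probability.
    p = 1.0 / (1.0 + np.exp(-h[:n_concepts]))
    eps = 1e-9
    align = -np.mean(concept_labels * np.log(p + eps)
                     + (1.0 - concept_labels) * np.log(1.0 - p + eps))
    return recon + lam_sparse * sparse + lam_align * align

# One toy example: an activation that expresses concept 0 only.
x = rng.normal(size=d_model)
labels = np.array([1.0, 0.0, 0.0, 0.0])
loss = post_train_loss(x, labels)
print(float(loss))
```

In this framing, a "concept swap" intervention would amount to editing the activation of a single aligned slot before decoding, which the paper reports can be done without interfering with the unaligned slots.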
Similar Papers
Enabling Precise Topic Alignment in Large Language Models Via Sparse Autoencoders
Computation and Language
Makes AI talk about any topic you want.
Sparse Autoencoders are Topic Models
CV and Pattern Recognition
Finds hidden themes in pictures and words.
Evaluating Sparse Autoencoders for Monosemantic Representation
Machine Learning (CS)
Makes AI understand ideas more clearly.