Sparse Concept Anchoring for Interpretable and Controllable Neural Representations
By: Sandy Fraser, Patryk Wielopolski
We introduce Sparse Concept Anchoring, a method that biases the latent space so a targeted subset of concepts occupies predefined positions while the remaining concepts self-organize, using only minimal supervision (labels for <0.1% of examples per anchored concept). Training combines activation normalization, a separation regularizer, and anchor or subspace regularizers that attract the rare labeled examples to predefined directions or axis-aligned subspaces. The anchored geometry enables two practical interventions: reversible behavioral steering, which projects out a concept's latent component at inference, and permanent removal via targeted weight ablation of the anchored dimensions. Experiments on structured autoencoders show selective attenuation of targeted concepts with negligible impact on orthogonal features, and complete elimination with reconstruction error approaching theoretical bounds. Sparse Concept Anchoring therefore provides a practical pathway to interpretable, steerable behavior in learned representations.
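The training objectives and the two interventions described above can be sketched compactly. The following is a minimal illustration in PyTorch under assumed shapes, not the authors' implementation; the function names (anchor_loss, separation_loss, steer_out, ablate_dims) and the exact loss forms are hypothetical stand-ins for the regularizers and interventions named in the abstract.

```python
# Minimal sketch (illustrative only): anchor and separation regularizers for a
# latent code z of dimension d, plus inference-time steering and weight ablation.
import torch
import torch.nn.functional as F

def anchor_loss(z, anchor_dirs, concept_ids):
    """Attract the few labeled latents toward their concept's fixed unit direction.

    z           : (B, d) latent codes of labeled examples
    anchor_dirs : (K, d) predefined unit vectors, one per anchored concept
    concept_ids : (B,)   index of each example's anchored concept
    """
    z_norm = F.normalize(z, dim=-1)                    # activation normalization
    targets = anchor_dirs[concept_ids]                 # (B, d)
    return (1.0 - (z_norm * targets).sum(-1)).mean()   # cosine attraction to anchor

def separation_loss(z):
    """One possible separation regularizer: penalize pairwise latent similarity."""
    z_norm = F.normalize(z, dim=-1)
    sim = z_norm @ z_norm.T
    off_diag = sim - torch.eye(len(z), device=z.device)
    return off_diag.pow(2).mean()

def steer_out(z, direction):
    """Reversible steering: project out one concept's component at inference."""
    d = F.normalize(direction, dim=-1)
    return z - (z @ d).unsqueeze(-1) * d

def ablate_dims(decoder_weight, dims):
    """Permanent removal: zero decoder weights reading the anchored dimensions."""
    decoder_weight = decoder_weight.clone()
    decoder_weight[:, dims] = 0.0
    return decoder_weight
```

In this sketch, anchor_loss and separation_loss would be added to the usual reconstruction objective during training, while steer_out and ablate_dims operate only at inference time on a trained model.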