Group Equivariance Meets Mechanistic Interpretability: Equivariant Sparse Autoencoders
By: Ege Erdogan, Ana Lucic
Potential Business Impact:
Makes AI features easier to interpret for scientific data with symmetries, such as rotated images.
Sparse autoencoders (SAEs) have proven useful for disentangling the opaque activations of neural networks, primarily large language models, into sets of interpretable features. However, adapting them to domains beyond language, such as scientific data with group symmetries, introduces challenges that can hinder their effectiveness. We show that incorporating such group symmetries into SAEs yields features that are more useful in downstream tasks. More specifically, we train autoencoders on synthetic images and find that a single matrix can explain how their activations transform as the images are rotated. Building on this, we develop adaptively equivariant SAEs that can adapt to the base model's level of equivariance. These adaptive SAEs discover features that lead to superior probing performance compared to regular SAEs, demonstrating the value of incorporating symmetries into mechanistic interpretability tools.
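The abstract's claim that "a single matrix can explain how their activations transform as the images are rotated" is a statement of linear equivariance: if f(x) denotes the base activations, then f(rotate(x)) ≈ f(x) M for some fixed matrix M. Below is a minimal sketch of how one could test this with a least-squares fit; the array names, shapes, and random stand-in data are illustrative assumptions, not the authors' implementation.

import numpy as np

# Hypothetical setup: `acts` holds base-model activations for a batch of
# synthetic images, `acts_rot` holds activations for the same images rotated
# by a fixed angle. Shapes and data here are placeholders for illustration.
rng = np.random.default_rng(0)
n, d = 512, 64                      # batch size, activation dimension
acts = rng.normal(size=(n, d))      # stand-in for f(x)
acts_rot = rng.normal(size=(n, d))  # stand-in for f(rotate(x))

# Fit a single matrix M (least squares) such that acts @ M ~= acts_rot,
# i.e. test whether one linear map explains how the activations transform
# when the input images are rotated.
M, *_ = np.linalg.lstsq(acts, acts_rot, rcond=None)

# Fraction of variance in the rotated activations explained by the fit.
resid = acts_rot - acts @ M
r2 = 1.0 - (resid ** 2).sum() / ((acts_rot - acts_rot.mean(0)) ** 2).sum()
print(f"Linear-equivariance fit R^2: {r2:.3f}")

An R^2 close to 1 on real activations would indicate that a single linear map captures the effect of rotation, which is the property the adaptively equivariant SAEs described in the abstract build on.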
Similar Papers
AlignSAE: Concept-Aligned Sparse Autoencoders
Machine Learning (CS)
Lets AI understand and change specific ideas easily.
Resurrecting the Salmon: Rethinking Mechanistic Interpretability with Domain-Specific Sparse Autoencoders
Machine Learning (CS)
Helps AI understand medical words better.
Enforcing Orderedness to Improve Feature Consistency
Machine Learning (CS)
Makes AI models' thinking more predictable and consistent.