Enforcing Orderedness to Improve Feature Consistency
By: Sophie L. Wang, Alex Quach, Nithin Parsan, and more
Potential Business Impact:
Makes AI models' thinking more predictable and consistent.
Sparse autoencoders (SAEs) are widely used for neural network interpretability, but the features they learn often vary across random seeds and hyperparameter settings. We introduce Ordered Sparse Autoencoders (OSAE), which extend Matryoshka SAEs by (1) imposing a strict ordering on the latent features and (2) deterministically using every feature dimension, avoiding the sampling-based approximations of prior nested SAE methods. Theoretically, we show that OSAEs resolve permutation non-identifiability in sparse dictionary learning settings where the solution is unique (up to natural symmetries). Empirically, on Gemma2-2B and Pythia-70M, OSAEs improve feature consistency compared to Matryoshka baselines.
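To make the ordering idea concrete, here is a minimal NumPy sketch of the nested-prefix training loss described in the abstract: each latent is only allowed to help reconstruct the input together with all earlier latents, and the loss sums over every prefix length rather than sampling a few, so each feature dimension is used deterministically. The function and variable names (`osae_loss`, `W_enc`, `W_dec`) are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def osae_loss(x, W_enc, W_dec):
    """Illustrative ordered-SAE loss (not the paper's exact code).

    x      : input activation vector, shape (d,)
    W_enc  : encoder weights, shape (d, m)
    W_dec  : decoder weights, shape (m, d); row j is latent j's direction

    The loss sums the reconstruction error over EVERY prefix of the
    ordered latents (k = 1..m), so early latents must carry the most
    broadly useful features -- this is what breaks the permutation
    symmetry of a standard SAE.
    """
    z = np.maximum(0.0, x @ W_enc)        # ReLU encoder, shape (m,)
    m = z.shape[0]
    loss = 0.0
    recon = np.zeros_like(x)
    for k in range(m):
        recon = recon + z[k] * W_dec[k]   # add latent k's contribution
        loss += np.sum((x - recon) ** 2)  # prefix-(k+1) reconstruction error
    return loss / m
```

Accumulating the reconstruction incrementally keeps the cost at O(m·d) instead of recomputing each prefix from scratch (O(m²·d)); a sparsity penalty on `z`, which a real SAE objective would include, is omitted here for brevity.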
Similar Papers
Group Equivariance Meets Mechanistic Interpretability: Equivariant Sparse Autoencoders
Machine Learning (CS)
Finds hidden patterns in data using math.
AlignSAE: Concept-Aligned Sparse Autoencoders
Machine Learning (CS)
Lets AI understand and change specific ideas easily.
On the Theoretical Understanding of Identifiable Sparse Autoencoders and Beyond
Machine Learning (CS)
Unlocks AI's hidden thoughts for better understanding.