SplInterp: Improving our Understanding and Training of Sparse Autoencoders
By: Jeremy Budd, Javier Ideami, Benjamin Macdowall Rynne, and more
Potential Business Impact:
Makes AI understand itself better.
Sparse autoencoders (SAEs) have received considerable recent attention as tools for mechanistic interpretability, showing success at extracting interpretable features even from very large LLMs. However, this research has been largely empirical, and there have been recent doubts about the true utility of SAEs. In this work, we seek to enhance the theoretical understanding of SAEs using the spline theory of deep learning. By situating SAEs in this framework, we discover that SAEs generalise "$k$-means autoencoders" to be piecewise affine, but sacrifice accuracy for interpretability relative to the optimal "$k$-means-esque plus local principal component analysis (PCA)" piecewise affine autoencoder. We characterise the underlying geometry of (TopK) SAEs using power diagrams, and we develop a novel proximal alternating method SGD (PAM-SGD) algorithm for training SAEs, with both solid theoretical foundations and promising empirical results on MNIST and in LLM experiments, particularly in sample efficiency and (in the LLM setting) improved sparsity of codes. All code is available at: https://github.com/splInterp2025/splInterp
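For readers unfamiliar with the object the abstract analyses, the sketch below shows a minimal TopK sparse autoencoder in PyTorch: a linear encoder whose pre-activations are thresholded to keep only the $k$ largest entries per sample, followed by a linear decoder. This is an illustrative assumption-laden sketch, not the authors' implementation (see the linked repository for that); layer sizes, names, and the plain-SGD training loop are placeholders, and the paper's PAM-SGD scheme is not reproduced here.

```python
# Minimal sketch of a TopK sparse autoencoder (illustrative only; see the
# linked repository for the paper's actual code). Dimensions, names, and
# the training loop are placeholder assumptions, not the authors' setup.
import torch
import torch.nn as nn


class TopKSAE(nn.Module):
    def __init__(self, d_model: int, d_hidden: int, k: int):
        super().__init__()
        self.k = k
        self.encoder = nn.Linear(d_model, d_hidden)
        self.decoder = nn.Linear(d_hidden, d_model)

    def forward(self, x: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]:
        # Encode, then keep only the k largest pre-activations per sample,
        # zeroing the rest -- this is what makes the code sparse.
        z = self.encoder(x)
        topk = torch.topk(z, self.k, dim=-1)
        codes = torch.zeros_like(z).scatter_(-1, topk.indices, topk.values)
        return self.decoder(codes), codes


# Toy usage: reconstruct random "activations" with plain SGD on the
# reconstruction error (not the paper's PAM-SGD training scheme).
if __name__ == "__main__":
    model = TopKSAE(d_model=64, d_hidden=512, k=8)
    opt = torch.optim.SGD(model.parameters(), lr=1e-2)
    x = torch.randn(256, 64)
    for _ in range(100):
        x_hat, codes = model(x)
        loss = ((x_hat - x) ** 2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    print(f"reconstruction MSE: {loss.item():.4f}")
```

Because each sample activates at most $k$ hidden units, the encoder induces a partition of input space into regions with a fixed active set, which is the piecewise-affine / power-diagram structure the paper studies.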
Similar Papers
Sparse Autoencoders Can Interpret Randomly Initialized Transformers
Machine Learning (CS)
Makes AI brains understandable, even random ones.
Sparse Autoencoders Trained on the Same Data Learn Different Features
Machine Learning (CS)
AI finds different "thinking parts" each time.
Evaluating Sparse Autoencoders: From Shallow Design to Matching Pursuit
Machine Learning (CS)
Finds hidden patterns in handwritten numbers.