On the Theoretical Foundation of Sparse Dictionary Learning in Mechanistic Interpretability
By: Yiming Tang, Harshvardhan Saini, Yizhen Liao, and more
As AI models achieve remarkable capabilities across diverse domains, understanding what representations they learn and how they process information has become increasingly important for both scientific progress and trustworthy deployment. Recent work in mechanistic interpretability has shown that neural networks represent meaningful concepts as directions in their representation spaces and often encode many concepts in superposition. Various sparse dictionary learning (SDL) methods, including sparse autoencoders, transcoders, and crosscoders, address this by training auxiliary models with sparsity constraints to disentangle these superposed concepts into interpretable features. These methods have achieved remarkable empirical success, but their theoretical understanding remains limited. Existing theoretical work covers only sparse autoencoders with tied-weight constraints, leaving the broader family of SDL methods without formal grounding. In this work, we develop the first unified theoretical framework that casts SDL as a single optimization problem. We show how diverse methods instantiate this framework and provide a rigorous analysis of the optimization landscape. We give the first theoretical explanations for several empirically observed phenomena, including feature absorption, dead neurons, and the neuron resampling technique. We further design controlled experiments to validate our theoretical results.
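To make the SDL setup the abstract describes concrete, here is a minimal sketch of one common instantiation: a ReLU sparse autoencoder trained with a reconstruction loss plus an L1 sparsity penalty. The architecture, loss, and hyperparameters (`d_model`, `d_dict`, `sparsity_coeff`) are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal sketch of a sparse-dictionary-learning objective, instantiated as a
# ReLU sparse autoencoder with an L1 sparsity penalty. The paper's unified
# framework covers a broader family (transcoders, crosscoders, etc.); this is
# only one hypothetical instance.
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, d_model: int, d_dict: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_dict)   # maps activations to feature codes
        self.decoder = nn.Linear(d_dict, d_model)   # dictionary of feature directions

    def forward(self, x: torch.Tensor):
        f = torch.relu(self.encoder(x))             # sparse, non-negative feature activations
        x_hat = self.decoder(f)                     # reconstruction from the dictionary
        return x_hat, f

def sdl_loss(x, x_hat, f, sparsity_coeff: float = 1e-3):
    # Reconstruction error plus a sparsity penalty on the feature codes.
    recon = (x - x_hat).pow(2).sum(dim=-1).mean()
    sparsity = f.abs().sum(dim=-1).mean()
    return recon + sparsity_coeff * sparsity

# Usage on a batch of stand-in model activations (dimensions are assumptions):
sae = SparseAutoencoder(d_model=512, d_dict=4096)
x = torch.randn(64, 512)
x_hat, f = sae(x)
loss = sdl_loss(x, x_hat, f)
loss.backward()
```

Phenomena mentioned in the abstract map onto this sketch: features whose codes in `f` stay at zero across the data are "dead neurons", and neuron resampling reinitializes their corresponding rows of the encoder and columns of the decoder.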
Similar Papers
Incorporating Hierarchical Semantics in Sparse Autoencoder Architectures
Computation and Language
Teaches models to organize ideas hierarchically.
Weight-sparse transformers have interpretable circuits
Machine Learning (CS)
Makes AI easier to understand by simplifying its parts.
Group Equivariance Meets Mechanistic Interpretability: Equivariant Sparse Autoencoders
Machine Learning (CS)
Finds hidden patterns in data using math.