Evaluating Sparse Autoencoders: From Shallow Design to Matching Pursuit
By: Valérie Costa, Thomas Fel, Ekdeep Singh Lubana, and more
Potential Business Impact:
Finds hidden patterns in handwritten numbers.
Sparse autoencoders (SAEs) have recently become central tools for interpretability, leveraging dictionary learning principles to extract sparse, interpretable features from neural representations whose underlying structure is typically unknown. This paper evaluates SAEs in a controlled setting using MNIST and finds that current shallow architectures implicitly rely on a quasi-orthogonality assumption, which limits their ability to extract correlated features. To move beyond this, we introduce a multi-iteration SAE obtained by unrolling Matching Pursuit (MP-SAE). By extracting features from the residual at each iteration, MP-SAE captures the correlated features that arise in hierarchical settings such as handwritten digit generation, while guaranteeing that the reconstruction improves monotonically as more atoms are selected.
Similar Papers
From Flat to Hierarchical: Extracting Sparse Representations with Matching Pursuit
Machine Learning (CS)
Finds hidden patterns in AI's thinking.
Empirical Evaluation of Progressive Coding for Sparse Autoencoders
Machine Learning (CS)
Makes AI understand things better, faster, and cheaper.
On the Theoretical Understanding of Identifiable Sparse Autoencoders and Beyond
Machine Learning (CS)
Unlocks AI's hidden thoughts for better understanding.