Sparse Autoencoders are Topic Models
By: Leander Girrbach, Zeynep Akata
Potential Business Impact:
Finds hidden themes in pictures and words.
Sparse autoencoders (SAEs) are used to analyze embeddings, but their role and practical value are debated. We propose a new perspective on SAEs, demonstrating that they can be naturally understood as topic models. We extend Latent Dirichlet Allocation to embedding spaces and derive the SAE objective as a maximum a posteriori estimator under this model. This view implies that SAE features are thematic components rather than steerable directions. Building on it, we introduce SAE-TM, a topic modeling framework that (1) trains an SAE to learn reusable topic atoms, (2) interprets them as word distributions on downstream data, and (3) merges them into any number of topics without retraining. SAE-TM yields more coherent topics than strong baselines on text and image datasets while maintaining diversity. Finally, we analyze thematic structure in image datasets and trace topic changes over time in Japanese woodblock prints. Our work positions SAEs as effective tools for large-scale thematic analysis across modalities. Code and data will be released upon publication.
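The MAP claim can be made concrete with a textbook special case. The sketch below is our illustration under assumptions we choose (Gaussian reconstruction noise, a Laplace prior on the codes), not the paper's LDA-based derivation, which instead extends a Dirichlet-style generative model to embedding space:

```latex
% Hedged illustration, not the paper's derivation: with embeddings
% x ~ N(Wz, sigma^2 I) and a Laplace prior p(z) ∝ exp(-lambda ||z||_1),
% the negative log-posterior is, up to constants,
\[
  -\log p(z \mid x)
  \;\propto\;
  \frac{1}{2\sigma^{2}}\,\lVert x - Wz \rVert_2^{2}
  \;+\;
  \lambda\,\lVert z \rVert_1 ,
\]
% i.e., the familiar SAE reconstruction-plus-sparsity objective, with
% the columns of the decoder W playing the role of topic atoms.
```

Step (3) of SAE-TM, merging learned atoms into any number of topics without retraining, can likewise be sketched. Everything named here is hypothetical scaffolding (the decoder matrix W_dec, the atom counts, the choice of cosine-similarity agglomerative clustering); the paper's actual merging procedure may differ:

```python
# Hypothetical sketch of SAE-TM's merging step (step 3): names and the
# clustering choice are assumptions, not the paper's implementation.
import numpy as np
from sklearn.cluster import AgglomerativeClustering

rng = np.random.default_rng(0)
n_atoms, d_model, n_topics = 512, 64, 10

# Stand-in for the decoder of a trained SAE: each row is one learned
# "topic atom" direction in embedding space.
W_dec = rng.normal(size=(n_atoms, d_model))
W_dec /= np.linalg.norm(W_dec, axis=1, keepdims=True)

# Merge atoms into n_topics groups without retraining, here by
# agglomerative clustering on cosine similarity between atom directions.
labels = AgglomerativeClustering(
    n_clusters=n_topics, metric="cosine", linkage="average"
).fit_predict(W_dec)

# Represent each merged topic by the normalized mean of its atoms; in
# SAE-TM these would then be read out as word distributions on the
# downstream corpus.
topics = np.stack([W_dec[labels == k].mean(axis=0) for k in range(n_topics)])
topics /= np.linalg.norm(topics, axis=1, keepdims=True)
print(topics.shape)  # (10, 64)
```

Even in this simplified form, the design point the abstract emphasizes holds: the expensive part (training the SAE) happens once, and the number of topics becomes a cheap post-hoc choice rather than a retraining decision.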
Similar Papers
Interpretable Embeddings with Sparse Autoencoders: A Data Analysis Toolkit
Artificial Intelligence
Finds hidden ideas in text data.
Enabling Precise Topic Alignment in Large Language Models Via Sparse Autoencoders
Computation and Language
Makes AI talk about any topic you want.
Sparse Autoencoders Trained on the Same Data Learn Different Features
Machine Learning (CS)
AI finds different "thinking parts" each time.