Incorporating Hierarchical Semantics in Sparse Autoencoder Architectures
By: Mark Muchane, Sean Richardson, Kiho Park, and more
Potential Business Impact:
Teaches computers to organize the ideas they learn into a hierarchy, from broad to specific.
Sparse dictionary learning (and, in particular, sparse autoencoders) attempts to learn a set of human-understandable concepts that can explain variation in an abstract space. A basic limitation of this approach is that it neither exploits nor represents the semantic relationships between the learned concepts. In this paper, we introduce a modified SAE architecture that explicitly models a semantic hierarchy of concepts. Applying this architecture to the internal representations of large language models shows that a semantic hierarchy can be learned, and that doing so improves both reconstruction and interpretability. Additionally, the architecture leads to significant improvements in computational efficiency.
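The abstract does not spell out the architecture, but one way to picture a hierarchy-aware SAE is to group latents into parent concepts with attached child concepts, where a child can only activate when its parent does. The sketch below is an illustrative assumption in that spirit, not the authors' actual design; all names, dimensions, and the gating rule are hypothetical.

```python
# Minimal sketch (assumption, not the paper's architecture): an SAE whose latents
# are split into parent concepts and per-parent child concepts, with child latents
# gated off whenever their parent latent is inactive.
import torch
import torch.nn as nn


class HierarchicalSAE(nn.Module):
    def __init__(self, d_model: int, n_parents: int, children_per_parent: int):
        super().__init__()
        n_latents = n_parents * (1 + children_per_parent)
        self.n_parents = n_parents
        self.children_per_parent = children_per_parent
        self.encoder = nn.Linear(d_model, n_latents)
        self.decoder = nn.Linear(n_latents, d_model)

    def forward(self, x: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]:
        z = torch.relu(self.encoder(x))                # sparse, non-negative codes
        n_p, c = self.n_parents, self.children_per_parent
        parents = z[..., :n_p]                         # one latent per parent concept
        children = z[..., n_p:].reshape(*z.shape[:-1], n_p, c)
        # Hierarchical gating: zero a child latent unless its parent is active.
        gate = (parents > 0).float().unsqueeze(-1)
        children = (children * gate).reshape(*z.shape[:-1], n_p * c)
        z_gated = torch.cat([parents, children], dim=-1)
        x_hat = self.decoder(z_gated)                  # reconstruct the input activation
        return x_hat, z_gated


if __name__ == "__main__":
    sae = HierarchicalSAE(d_model=512, n_parents=64, children_per_parent=8)
    acts = torch.randn(4, 512)                         # stand-in for LLM activations
    recon, codes = sae(acts)
    loss = ((recon - acts) ** 2).mean() + 1e-3 * codes.abs().mean()  # reconstruction + L1 sparsity
    print(recon.shape, loss.item())
```

Under this kind of gating, child latents belonging to inactive parents could in principle be skipped entirely at inference time, which is one plausible route to the computational-efficiency gains the abstract mentions.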
Similar Papers
Projecting Assumptions: The Duality Between Sparse Autoencoders and Concept Geometry
Machine Learning (CS)
Finds hidden ideas inside computer brains.
Sparse Autoencoders, Again?
Machine Learning (CS)
Finds hidden patterns in data better.
Empirical Evaluation of Progressive Coding for Sparse Autoencoders
Machine Learning (CS)
Makes AI understand things better, faster, and cheaper.