Incorporating Hierarchical Semantics in Sparse Autoencoder Architectures

Published: June 1, 2025 | arXiv ID: 2506.01197v1

By: Mark Muchane, Sean Richardson, Kiho Park, and more

Potential Business Impact:

Teaches AI models to organize the concepts they learn into hierarchies, making them more interpretable and more efficient to run.

Business Areas:
Semantic Search, Internet Services

Sparse dictionary learning (and, in particular, sparse autoencoders) attempts to learn a set of human-understandable concepts that can explain variation in an abstract space. A basic limitation of this approach is that it neither exploits nor represents the semantic relationships between the learned concepts. In this paper, we introduce a modified SAE architecture that explicitly models a semantic hierarchy of concepts. Applying this architecture to the internal representations of large language models shows that a semantic hierarchy can be learned, and that doing so improves both reconstruction and interpretability. Additionally, the architecture yields significant gains in computational efficiency.
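The paper does not spell out its architecture in this summary, but the core idea of a hierarchy-aware SAE can be illustrated with a minimal sketch: parent concept latents gate their child latents, so a child concept can only activate when its parent does. All weights, dimensions, and the gating scheme below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

d_model = 16      # dimensionality of the representation being reconstructed
n_parents = 4     # top-level concept latents
n_children = 3    # child latents per parent

# Randomly initialized encoder/decoder weights (hypothetical; in practice
# these would be trained with a reconstruction + sparsity objective).
W_enc_p = rng.normal(0, 0.1, (d_model, n_parents))
W_enc_c = rng.normal(0, 0.1, (d_model, n_parents * n_children))
W_dec_p = rng.normal(0, 0.1, (n_parents, d_model))
W_dec_c = rng.normal(0, 0.1, (n_parents * n_children, d_model))

def relu(x):
    return np.maximum(x, 0.0)

def hierarchical_sae(x):
    """Encode x with parent latents that gate their children, so a child
    concept can only fire when its parent concept is active."""
    z_parent = relu(x @ W_enc_p)                    # parent activations
    z_child_raw = relu(x @ W_enc_c)                 # ungated child activations
    gate = np.repeat(z_parent > 0, n_children)      # mask children of inactive parents
    z_child = z_child_raw * gate
    x_hat = z_parent @ W_dec_p + z_child @ W_dec_c  # reconstruction
    return x_hat, z_parent, z_child

x = rng.normal(0, 1.0, d_model)
x_hat, z_p, z_c = hierarchical_sae(x)
print(x_hat.shape)
```

The gating also hints at where efficiency gains could come from: subtrees under inactive parents can be skipped entirely at decode time.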

Repos / Data Links

Page Count
26 pages

Category
Computer Science:
Computation and Language