The Origins of Representation Manifolds in Large Language Models
By: Alexander Modell, Patrick Rubin-Delanchy, Nick Whiteley
Potential Business Impact:
Helps people understand how AI represents ideas and how related ideas connect.
There is a large ongoing scientific effort in mechanistic interpretability to map embeddings and internal representations of AI systems into human-understandable concepts. A key element of this effort is the linear representation hypothesis, which posits that neural representations are sparse linear combinations of "almost-orthogonal" direction vectors, reflecting the presence or absence of different features. This model underpins the use of sparse autoencoders to recover features from representations. Moving towards a fuller model of features, in which neural representations could encode not just the presence but also a potentially continuous and multidimensional value for a feature, has been a subject of intense recent discourse. We describe why and how a feature might be represented as a manifold, demonstrating in particular that cosine similarity in representation space may encode the intrinsic geometry of a feature through shortest, on-manifold paths, potentially answering the question of how distance in representation space and relatedness in concept space could be connected. The critical assumptions and predictions of the theory are validated on text embeddings and token activations of large language models.
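To make the abstract's central claim concrete, here is a minimal, hypothetical sketch (not the authors' code): it embeds a one-dimensional circular feature manifold into a high-dimensional representation space via two random, nearly orthogonal directions, and checks that cosine similarity between representations tracks shortest on-manifold (geodesic) distance. The dimension, sample count, and noise scale are illustrative assumptions, not values from the paper.

```python
# Illustrative sketch: cosine similarity vs. on-manifold (geodesic) distance
# for a 1-D circular feature embedded in a high-dimensional space.
import numpy as np

rng = np.random.default_rng(0)
dim, n_points = 512, 200          # ambient dimension, samples on the manifold (assumed values)

# Feature values on a circle (intrinsically 1-D, periodic).
theta = np.linspace(0.0, 2 * np.pi, n_points, endpoint=False)

# Random high-dimensional directions are nearly orthogonal when dim is large.
u = rng.normal(size=dim) / np.sqrt(dim)
v = rng.normal(size=dim) / np.sqrt(dim)

# Representations: the feature's (cos, sin) coordinates carried by u and v,
# plus small isotropic noise standing in for other, unrelated features.
reps = np.outer(np.cos(theta), u) + np.outer(np.sin(theta), v)
reps += 0.05 * rng.normal(size=reps.shape) / np.sqrt(dim)

# Cosine similarity between all pairs of representations.
unit = reps / np.linalg.norm(reps, axis=1, keepdims=True)
cos_sim = unit @ unit.T

# Geodesic (shortest on-manifold) distance on the circle.
dtheta = np.abs(theta[:, None] - theta[None, :])
geodesic = np.minimum(dtheta, 2 * np.pi - dtheta)

# Higher cosine similarity should correspond to shorter geodesic distance.
iu = np.triu_indices(n_points, k=1)
corr = np.corrcoef(cos_sim[iu], geodesic[iu])[0, 1]
print(f"correlation(cosine similarity, geodesic distance) = {corr:.3f}")
```

Under these toy assumptions the correlation comes out strongly negative: pairs of points that are close along the manifold have higher cosine similarity, which is the qualitative link between distance in representation space and relatedness in concept space that the paper investigates.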
Similar Papers
Large Language Models Encode Semantics in Low-Dimensional Linear Subspaces
Computation and Language
Makes AI safer by finding bad ideas inside.
Native Logical and Hierarchical Representations with Subspace Embeddings
Machine Learning (CS)
Computers understand words and their meanings better.
Leveraging Manifold Embeddings for Enhanced Graph Transformer Representations and Learning
Machine Learning (CS)
Helps computers understand complex networks better.