The Origins of Representation Manifolds in Large Language Models

Published: May 23, 2025 | arXiv ID: 2505.18235v1

By: Alexander Modell, Patrick Rubin-Delanchy, Nick Whiteley

Potential Business Impact:

Helps explain how AI models internally represent concepts and how related concepts connect, making model behavior easier to interpret.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

There is a large ongoing scientific effort in mechanistic interpretability to map embeddings and internal representations of AI systems into human-understandable concepts. A key element of this effort is the linear representation hypothesis, which posits that neural representations are sparse linear combinations of "almost-orthogonal" direction vectors, reflecting the presence or absence of different features. This model underpins the use of sparse autoencoders to recover features from representations. Moving towards a fuller model of features, in which neural representations could encode not just the presence but also a potentially continuous and multidimensional value for a feature, has been a subject of intense recent discourse. We describe why and how a feature might be represented as a manifold, demonstrating in particular that cosine similarity in representation space may encode the intrinsic geometry of a feature through shortest, on-manifold paths, potentially answering the question of how distance in representation space and relatedness in concept space could be connected. The critical assumptions and predictions of the theory are validated on text embeddings and token activations of large language models.
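
As a rough illustration of the manifold picture described in the abstract (not code from the paper), the sketch below embeds a one-dimensional circular feature into a high-dimensional space along two random, nearly orthogonal directions, then checks that cosine similarity between representations tracks the shortest on-manifold (geodesic) distance between feature values. The dimension, the circular parameterisation, and the NumPy construction are all illustrative assumptions.

```python
# Toy sketch (assumed construction, not the paper's experiments): embed a
# 1-D circular feature manifold into a high-dimensional space using random,
# nearly orthogonal directions, then compare cosine similarity between
# representations with the on-manifold (geodesic) distance between values.
import numpy as np

rng = np.random.default_rng(0)
d = 2048            # ambient representation dimension (illustrative)
n = 200             # number of sampled feature values

# Random high-dimensional directions are almost orthogonal with high probability.
u, v = rng.standard_normal(d), rng.standard_normal(d)
u, v = u / np.linalg.norm(u), v / np.linalg.norm(v)

# Parameterise a circular feature (an angle-like concept) and map it onto
# the two directions, giving a one-dimensional manifold in representation space.
theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
reps = np.outer(np.cos(theta), u) + np.outer(np.sin(theta), v)

# Cosine similarity of every representation to the first one.
reps_unit = reps / np.linalg.norm(reps, axis=1, keepdims=True)
cos_sim = reps_unit @ reps_unit[0]

# Geodesic (shortest on-manifold) distance from the first feature value.
geodesic = np.minimum(theta, 2.0 * np.pi - theta)

# In this construction, cosine similarity is approximately a monotone
# decreasing function of geodesic distance: cos_sim ~ cos(geodesic).
print(np.corrcoef(cos_sim, np.cos(geodesic))[0, 1])  # close to 1.0
```

In this toy setting, cosine similarity in representation space is a monotone function of geodesic distance along the feature manifold, which is the kind of link between representation-space distance and concept-space relatedness that the abstract describes.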

Country of Origin
🇬🇧 United Kingdom

Page Count
16 pages

Category
Computer Science:
Machine Learning (CS)