Mechanistic Decomposition of Sentence Representations

Published: June 4, 2025 | arXiv ID: 2506.04373v2

By: Matthieu Tehenan, Vikram Natarajan, Jonathan Michala, and more

Potential Business Impact:

Reveals what information is stored inside sentence embeddings, making the language models behind search, retrieval, and recommendation systems easier to audit and control.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Sentence embeddings are central to modern NLP and AI systems, yet little is known about their internal structure. While we can compare these embeddings using measures such as cosine similarity, the features that contribute to that similarity are not human-interpretable, and the content of an embedding appears untraceable, masked by complex neural transformations and a final pooling operation that combines individual token embeddings. To alleviate this issue, we propose a new method to mechanistically decompose sentence embeddings into interpretable components using dictionary learning on token-level representations. We analyze how pooling compresses these features into sentence representations and assess the latent features that reside in a sentence embedding. This bridges token-level mechanistic interpretability with sentence-level analysis, enabling more transparent and controllable representations. Our studies yield several insights into the inner workings of sentence embedding spaces; for instance, many semantic and syntactic aspects are linearly encoded in the embeddings.
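
As a rough illustration of the decomposition the abstract describes, the sketch below learns a sparse dictionary over token-level vectors, mean-pools a set of tokens into a sentence embedding, and re-encodes that embedding against the same dictionary to see which learned features survive pooling and with what weight. The use of scikit-learn's DictionaryLearning and sparse_encode, the dimensions, and the random stand-in data are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch (not the paper's released code): decompose a mean-pooled
# sentence embedding into dictionary features learned on token-level vectors.
# Library choice (scikit-learn), shapes, and random stand-in data are assumed.
import numpy as np
from sklearn.decomposition import DictionaryLearning, sparse_encode

rng = np.random.default_rng(0)

# Stand-in for token-level hidden states from a sentence encoder:
# n_tokens x d_model (random here; in practice, real encoder activations).
n_tokens, d_model, n_atoms = 200, 32, 64
token_reprs = rng.normal(size=(n_tokens, d_model))

# Learn an overcomplete dictionary of candidate interpretable features
# on the token-level representations.
learner = DictionaryLearning(n_components=n_atoms, alpha=1.0,
                             max_iter=30, random_state=0)
token_codes = learner.fit_transform(token_reprs)  # sparse code per token
atoms = learner.components_                       # (n_atoms, d_model) feature directions

# Mean pooling combines token vectors into a single sentence embedding.
sentence_tokens = token_reprs[:12]                # tokens of one "sentence"
sentence_embedding = sentence_tokens.mean(axis=0)

# Because mean pooling is linear, the sentence embedding can itself be
# sparsely encoded against the same dictionary, exposing which token-level
# features remain represented after pooling.
sentence_code = sparse_encode(sentence_embedding[None, :], atoms,
                              algorithm="lasso_lars", alpha=0.05)[0]

# List the strongest active features in the sentence embedding.
active = np.flatnonzero(sentence_code)
for idx in active[np.argsort(-np.abs(sentence_code[active]))][:10]:
    print(f"feature {idx:3d}  weight {sentence_code[idx]:+.3f}")

# How much of the sentence embedding the sparse feature code explains.
recon = sentence_code @ atoms
rel_err = np.linalg.norm(sentence_embedding - recon) / np.linalg.norm(sentence_embedding)
print(f"relative reconstruction error: {rel_err:.3f}")
```

In a real setting, the random matrix would be replaced by token activations from the encoder of interest, and the inspected features would be interpreted by examining which tokens and contexts activate each dictionary atom.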

Country of Origin
🇬🇧 United Kingdom

Page Count
16 pages

Category
Computer Science: Computation and Language