Mechanistic Decomposition of Sentence Representations
By: Matthieu Tehenan, Vikram Natarajan, Jonathan Michala, and more
Potential Business Impact:
Shows what information is packed inside the sentence representations computers use, making them more transparent and controllable.
Sentence embeddings are central to modern NLP and AI systems, yet little is known about their internal structure. While we can compare these embeddings using measures such as cosine similarity, the features that drive the similarity are not human-interpretable, and the content of an embedding is hard to trace, as it is masked by complex neural transformations and a final pooling operation that combines individual token embeddings. To alleviate this issue, we propose a new method to mechanistically decompose sentence embeddings into interpretable components, using dictionary learning on token-level representations. We analyze how pooling compresses these features into sentence representations and assess the latent features that reside in a sentence embedding. This bridges token-level mechanistic interpretability with sentence-level analysis, yielding more transparent and controllable representations. Our experiments provide several insights into the inner workings of sentence embedding spaces; for instance, many semantic and syntactic aspects are linearly encoded in the embeddings.
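The core idea lends itself to a short illustration: because mean pooling is linear, a sparse dictionary learned over token-level representations carries over directly to the pooled sentence embedding, whose feature weights are just the average of the token-level codes. The sketch below is not the paper's implementation; it substitutes synthetic token vectors for a real encoder's hidden states, assumes mean pooling, and uses scikit-learn's DictionaryLearning as a stand-in for the dictionary learner.

# Minimal sketch of decomposing a mean-pooled sentence embedding
# into sparse dictionary features. Assumptions not taken from the
# paper: synthetic token representations, mean pooling, and
# scikit-learn's DictionaryLearning as the sparse coder.
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)

# Stand-in for token-level hidden states (n_tokens x d_model); in
# practice these would come from a sentence encoder's final layer.
token_reps = rng.normal(size=(200, 64))

# Learn a sparse dictionary over token representations; each atom
# is a candidate interpretable feature direction.
dl = DictionaryLearning(n_components=32, alpha=1.0, random_state=0)
dl.fit(token_reps)
atoms = dl.components_  # shape (32, 64)

# Take one "sentence" of 12 tokens and mean-pool it.
sentence_tokens = token_reps[:12]
sentence_embedding = sentence_tokens.mean(axis=0)

# Sparse-code each token, then average the codes. Because pooling
# is linear, the average of the token codes reconstructs the
# mean-pooled embedding (its residual is exactly the average of
# the per-token residuals).
token_codes = dl.transform(sentence_tokens)  # (12, 32), sparse
pooled_codes = token_codes.mean(axis=0)      # feature weights of the sentence
reconstruction = pooled_codes @ atoms

rel_err = np.linalg.norm(sentence_embedding - reconstruction)
rel_err /= np.linalg.norm(sentence_embedding)
print(f"relative reconstruction error: {rel_err:.3f}")

# Rank dictionary atoms by contribution to this sentence embedding.
top = np.argsort(-np.abs(pooled_codes))[:5]
print("top feature atoms:", top, "weights:", pooled_codes[top])

On real encoder states, the same averaging step lets one ask which learned features a given sentence embedding is composed of, which is the kind of decomposition the abstract describes.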
Similar Papers
Static Word Embeddings for Sentence Semantic Representation
Computation and Language
Makes computers understand sentences better.
Text Simplification with Sentence Embeddings
Computation and Language
Makes hard text easy to understand.
Semantic Structure in Large Language Model Embeddings
Computation and Language
Finds simple structure in how computers represent word meanings.