Vector Arithmetic in Concept and Token Subspaces

Published: November 22, 2025 | arXiv ID: 2511.18162v1

By: Sheridan Feucht, Byron Wallace, David Bau

Potential Business Impact:

Helps explain how language models represent word meanings versus word spellings, which can improve model interpretability and debugging.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

In order to predict the next token, LLMs must represent semantic and surface-level information about the current word. Previous work identified two types of attention heads that disentangle this information: (i) Concept induction heads, which copy word meanings, and (ii) Token induction heads, which copy literal token representations (Feucht et al., 2025). We show that these heads can be used to identify subspaces of model activations that exhibit coherent semantic structure in Llama-2-7b. Specifically, when we transform hidden states using the attention weights of concept heads, we are able to more accurately perform parallelogram arithmetic (Mikolov et al., 2013) on the resulting hidden states, e.g., showing that "Athens" - "Greece" + "China" = "Beijing". This transformation allows for much higher nearest-neighbor accuracy (80%) than direct use of raw hidden states (47%). Analogously, we show that token heads allow for transformations that reveal surface-level word information in hidden states, allowing for operations like "coding" - "code" + "dance" = "dancing".
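To make the described transformation concrete, below is a minimal sketch (not the authors' released code) of the idea in the abstract: project hidden states through a single attention head's value and output weights (its OV circuit) in Llama-2-7b, then perform nearest-neighbor analogy arithmetic on the projected vectors. The layer and head indices, the candidate word list, and the use of the layer's input RMSNorm before the value projection are illustrative assumptions; the paper identifies specific concept and token heads rather than the placeholder indices used here.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "meta-llama/Llama-2-7b-hf"   # gated on the Hugging Face Hub; requires access
LAYER, HEAD = 12, 7                  # hypothetical indices, NOT the paper's concept heads

tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL, torch_dtype=torch.float32)
model.eval()

layer_block = model.model.layers[LAYER]
attn = layer_block.self_attn
head_dim = model.config.hidden_size // model.config.num_attention_heads

def head_ov(h):
    """Project one hidden-state vector through a single head's V and O weights."""
    h_norm = layer_block.input_layernorm(h)                # RMSNorm applied before attention
    v = attn.v_proj(h_norm)                                # (hidden_size,)
    v_head = v[HEAD * head_dim:(HEAD + 1) * head_dim]      # this head's value vector
    W_o_head = attn.o_proj.weight[:, HEAD * head_dim:(HEAD + 1) * head_dim]
    return W_o_head @ v_head                               # back into residual-stream space

@torch.no_grad()
def word_vec(word):
    """Hidden state of the word's final token at layer LAYER."""
    ids = tok(word, return_tensors="pt")
    hidden_states = model(**ids, output_hidden_states=True).hidden_states
    return hidden_states[LAYER][0, -1]

@torch.no_grad()
def analogy(a, b, c, candidates):
    """Return the candidate closest (cosine) to OV(a) - OV(b) + OV(c)."""
    target = head_ov(word_vec(a)) - head_ov(word_vec(b)) + head_ov(word_vec(c))
    scores = {w: torch.cosine_similarity(target, head_ov(word_vec(w)), dim=0).item()
              for w in candidates}
    return max(scores, key=scores.get)

print(analogy("Athens", "Greece", "China", ["Beijing", "Tokyo", "Paris", "Moscow"]))
```

Slicing `o_proj` by columns isolates one head's write into the residual stream, which is the standard way to read out a single head's contribution; whether the paper applies the OV circuit in exactly this form is an assumption of this sketch.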

Country of Origin
🇺🇸 United States

Page Count
9 pages

Category
Computer Science:
Computation and Language