Semantic Structure in Large Language Model Embeddings
By: Austin C. Kozlowski, Callin Dai, Andrei Boutyline
Potential Business Impact:
Word meanings inside large language models compress into a handful of interpretable dimensions, which matters for steering model behavior without unintended side effects.
Psychological research consistently finds that human ratings of words across diverse semantic scales can be reduced to a low-dimensional form with relatively little information loss. We find that the semantic associations encoded in the embedding matrices of large language models (LLMs) exhibit a similar structure. We show that the projections of words on semantic directions defined by antonym pairs (e.g. kind - cruel) correlate highly with human ratings, and further find that these projections effectively reduce to a 3-dimensional subspace within LLM embeddings, closely resembling the patterns derived from human survey responses. Moreover, we find that shifting tokens along one semantic direction causes off-target effects on geometrically aligned features proportional to their cosine similarity. These findings suggest that semantic features are entangled within LLMs similarly to how they are interconnected in human language, and a great deal of semantic information, despite its apparent complexity, is surprisingly low-dimensional. Furthermore, accounting for this semantic structure may prove essential for avoiding unintended consequences when steering features.
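A minimal sketch of the projection idea described in the abstract, assuming GPT-2's input embedding matrix (via Hugging Face transformers) as a stand-in for an LLM embedding matrix; the word choices, shift magnitude, and helper names (embed, unit_direction, proj) are illustrative and not taken from the paper.

```python
# Sketch: project token embeddings onto a semantic direction defined by an
# antonym pair, and illustrate the off-target effect of shifting along one
# direction on a geometrically aligned second direction.
import numpy as np
from transformers import GPT2Tokenizer, GPT2Model

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2Model.from_pretrained("gpt2")
E = model.wte.weight.detach().numpy()  # (vocab_size, d) token embedding matrix

def embed(word):
    # Average sub-word token embeddings; a leading space matches GPT-2's BPE vocabulary.
    ids = tok.encode(" " + word)
    return E[ids].mean(axis=0)

def unit_direction(pos, neg):
    # Semantic direction defined by an antonym pair, e.g. kind - cruel.
    v = embed(pos) - embed(neg)
    return v / np.linalg.norm(v)

def proj(vec, axis):
    # Scalar projection of a vector onto a unit-norm semantic direction.
    return float(np.dot(vec, axis))

kindness = unit_direction("kind", "cruel")
valence = unit_direction("good", "bad")

# Projections of example words on the kindness direction.
for w in ["gentle", "brutal", "table"]:
    print(f"{w:>8s}  kindness = {proj(embed(w), kindness):+.3f}")

# Off-target effect: shifting a token along one unit direction moves its projection
# on a second unit direction by alpha * cos(angle between the two directions).
alpha = 2.0  # illustrative shift magnitude
v = embed("person")
shifted = v + alpha * kindness
print("shift on valence axis:", round(proj(shifted, valence) - proj(v, valence), 3))
print("alpha * cosine       :", round(alpha * float(np.dot(kindness, valence)), 3))
```

Because both directions are unit vectors, the induced change in the valence projection equals alpha times the cosine similarity between the two directions, mirroring the proportional off-target effect the abstract describes.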
Similar Papers
The aftermath of compounds: Investigating Compounds and their Semantic Representations
Computation and Language
Examines how compound words and their meanings are represented in computational models.
From Five Dimensions to Many: Large Language Models as Precise and Interpretable Psychological Profilers
Artificial Intelligence
Uses large language models to build precise, interpretable psychological profiles from limited responses.
Word Meanings in Transformer Language Models
Computation and Language
Investigates how transformer language models represent word meanings, compared with human judgments.