Differential syntactic and semantic encoding in LLMs
By: Santiago Acevedo, Alessandro Laio, Marco Baroni
Potential Business Impact:
Shows where grammar and sentence meaning are stored inside large language models.
We study how syntactic and semantic information is encoded in inner-layer representations of Large Language Models (LLMs), focusing on the very large DeepSeek-V3. We find that, by averaging hidden-representation vectors of sentences sharing syntactic structure or meaning, we obtain vectors that capture a significant proportion of the syntactic and semantic information contained in the representations. In particular, subtracting these syntactic and semantic "centroids" from sentence vectors strongly affects their similarity with syntactically and semantically matched sentences, respectively, suggesting that syntax and semantics are, at least partially, linearly encoded. We also find that the cross-layer encoding profiles of syntax and semantics are different, and that the two signals can to some extent be decoupled, suggesting differential encoding of these two types of linguistic information in LLM representations.
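The centroid-subtraction idea described in the abstract can be illustrated with a minimal sketch. Assuming sentence hidden states from some layer are available as NumPy arrays grouped by shared syntactic structure, the sketch below averages a group to get a "centroid", subtracts it from sentence vectors, and compares cosine similarity with a syntactically matched sentence before and after removal. The array shapes, random placeholder data, and the cosine-similarity probe are illustrative assumptions, not the paper's exact protocol.

```python
import numpy as np

def centroid(vectors):
    """Mean of hidden-representation vectors for sentences that share
    a syntactic structure (or a meaning)."""
    return np.mean(vectors, axis=0)

def cosine(u, v):
    """Cosine similarity between two representation vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Placeholder data standing in for layer-l hidden states of sentences,
# grouped by shared syntactic template (dimensions are illustrative).
rng = np.random.default_rng(0)
group_a = rng.normal(size=(50, 4096))  # sentences with syntax pattern A
group_b = rng.normal(size=(50, 4096))  # sentences with syntax pattern B

syntax_centroid_a = centroid(group_a)

# Compare a sentence with a syntactically matched one, before and after
# removing the shared syntactic centroid from both.
s, match = group_a[0], group_a[1]
before = cosine(s, match)
after = cosine(s - syntax_centroid_a, match - syntax_centroid_a)
print(f"similarity before removal: {before:.3f}, after removal: {after:.3f}")
```

With real LLM hidden states, a large drop in similarity after subtracting the syntactic (or semantic) centroid would indicate that much of the shared signal was carried by that single averaged direction, consistent with the partially linear encoding the abstract describes.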
Similar Papers
Semantic Structure in Large Language Model Embeddings
Computation and Language
Words have simple meanings inside computers.
Large Language Models Encode Semantics in Low-Dimensional Linear Subspaces
Computation and Language
Makes AI safer by finding bad ideas inside.
Large Language Models Share Representations of Latent Grammatical Concepts Across Typologically Diverse Languages
Computation and Language
Computers learn languages by sharing grammar rules.