Different types of syntactic agreement recruit the same units within large language models
By: Daria Kryvosheieva, Andrea de Varda, Evelina Fedorenko, and more
Potential Business Impact:
Models learn grammar rules like humans do.
Large language models (LLMs) can reliably distinguish grammatical from ungrammatical sentences, but how grammatical knowledge is represented within the models remains an open question. We investigate whether different syntactic phenomena recruit shared or distinct components in LLMs. Using a functional localization approach inspired by cognitive neuroscience, we identify the LLM units most responsive to 67 English syntactic phenomena in seven open-weight models. These units are consistently recruited across sentences containing the phenomena and causally support the models' syntactic performance. Critically, different types of syntactic agreement (e.g., subject-verb, anaphor, determiner-noun) recruit overlapping sets of units, suggesting that agreement constitutes a meaningful functional category for LLMs. This pattern holds in English, Russian, and Chinese; furthermore, in a cross-lingual analysis of 57 diverse languages, structurally more similar languages share more units for subject-verb agreement. Taken together, these findings reveal that syntactic agreement, a critical marker of syntactic dependencies, constitutes a meaningful category within LLMs' representational spaces.
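The functional localization approach mentioned in the abstract can be illustrated with a minimal sketch. The idea, borrowed from cognitive neuroscience, is to contrast a model's unit activations on sentences that contain a syntactic phenomenon against matched sentences that violate it, then select the units with the largest response difference as the candidate "syntax-selective" set. The data below are simulated (random activations with a planted responsive subset); the array shapes, the 1% selection threshold, and the unit count are illustrative assumptions, not the paper's actual parameters.

```python
import numpy as np

# Hypothetical data: activations of 1000 model "units" (e.g., MLP neurons)
# averaged per sentence, for 50 grammatical / 50 ungrammatical minimal pairs.
rng = np.random.default_rng(0)
n_units, n_pairs = 1000, 50
gram = rng.normal(0.0, 1.0, size=(n_pairs, n_units))
ungram = rng.normal(0.0, 1.0, size=(n_pairs, n_units))
# Plant a small subset of genuinely "agreement-responsive" units (0..19).
gram[:, :20] += 1.5

# Localizer: rank units by mean activation difference (grammatical minus
# ungrammatical) and keep the top 1% as the candidate selective set.
diff = gram.mean(axis=0) - ungram.mean(axis=0)
k = max(1, int(0.01 * n_units))       # top 1% of units
selected = np.argsort(diff)[-k:]      # indices of the most responsive units

print(sorted(selected.tolist()))      # should fall inside the planted set
```

With real models, the same contrast would be computed over actual hidden-state activations for minimal pairs (e.g., BLiMP-style items), and the selected units could then be tested for consistency across held-out sentences and for causal impact via ablation, mirroring the consistency and causality checks described in the abstract.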
Similar Papers
Disaggregation Reveals Hidden Training Dynamics: The Case of Agreement Attraction
Computation and Language
Makes computers learn grammar like kids do.
Syntactic Blind Spots: How Misalignment Leads to LLMs Mathematical Errors
Computation and Language
Fixes math problems by changing how they're asked.
LLMs syntactically adapt their language use to their conversational partner
Computation and Language
Computers copy how people talk to each other.