Score: 2

Different types of syntactic agreement recruit the same units within large language models

Published: December 3, 2025 | arXiv ID: 2512.03676v1

By: Daria Kryvosheieva, Andrea de Varda, Evelina Fedorenko, and more

BigTech Affiliations: Massachusetts Institute of Technology

Potential Business Impact:

LLMs represent different types of grammatical agreement with shared internal units, a finding relevant to model interpretability and multilingual NLP.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Large language models (LLMs) can reliably distinguish grammatical from ungrammatical sentences, but how grammatical knowledge is represented within the models remains an open question. We investigate whether different syntactic phenomena recruit shared or distinct components in LLMs. Using a functional localization approach inspired by cognitive neuroscience, we identify the LLM units most responsive to 67 English syntactic phenomena in seven open-weight models. These units are consistently recruited across sentences containing the phenomena and causally support the models' syntactic performance. Critically, different types of syntactic agreement (e.g., subject-verb, anaphor, determiner-noun) recruit overlapping sets of units, suggesting that agreement constitutes a meaningful functional category for LLMs. This pattern holds in English, Russian, and Chinese; further, in a cross-lingual analysis of 57 diverse languages, structurally more similar languages share more units for subject-verb agreement. Taken together, these findings reveal that syntactic agreement, a critical marker of syntactic dependencies, constitutes a meaningful category within LLMs' representational spaces.
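The localization-and-overlap idea can be illustrated with a small sketch. The Python example below is a simplified illustration, not the authors' code: it assumes units are ranked by the difference in mean activation between grammatical and ungrammatical minimal-pair sentences, that the top fraction of units is kept as the phenomenon's localized set, and that overlap between phenomena is measured with a Jaccard index. The activations here are synthetic placeholders standing in for real model hidden states, and the function names, threshold, and overlap statistic are assumptions rather than the paper's exact method.

```python
import numpy as np

def localize_units(act_gram, act_ungram, top_frac=0.01):
    """Select the units most responsive to one syntactic phenomenon.

    act_gram, act_ungram: arrays of shape (n_sentences, n_units) holding
    per-unit activations for grammatical vs. ungrammatical minimal pairs.
    Returns the indices of the top `top_frac` fraction of units ranked by
    the grammatical-minus-ungrammatical mean activation difference
    (an assumed localizer criterion, not necessarily the paper's).
    """
    diff = act_gram.mean(axis=0) - act_ungram.mean(axis=0)
    k = max(1, int(top_frac * diff.size))
    return set(np.argsort(diff)[-k:].tolist())

def unit_overlap(units_a, units_b):
    """Jaccard overlap between two localized unit sets."""
    return len(units_a & units_b) / len(units_a | units_b)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_sent, n_units = 100, 4096  # hypothetical sizes

    # Synthetic activations; in practice these would come from running
    # minimal-pair sentences through an LLM and recording unit responses.
    subj_verb = (rng.normal(size=(n_sent, n_units)),
                 rng.normal(size=(n_sent, n_units)))
    anaphor = (rng.normal(size=(n_sent, n_units)),
               rng.normal(size=(n_sent, n_units)))

    units_sv = localize_units(*subj_verb)
    units_an = localize_units(*anaphor)
    print(f"overlap(subject-verb, anaphor) = {unit_overlap(units_sv, units_an):.3f}")
```

With synthetic random activations the overlap is near chance; the paper's finding is that, for real models, the unit sets localized for different agreement phenomena overlap substantially.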

Country of Origin
🇺🇸 United States

Repos / Data Links

Page Count
19 pages

Category
Computer Science:
Computation and Language