Towards Corpus-Grounded Agentic LLMs for Multilingual Grammatical Analysis
By: Matej Klemen, Tjaša Arčon, Luka Terčon, and more
Potential Business Impact:
AI helps researchers understand language rules across many languages.
Empirical grammar research has become increasingly data-driven, but the systematic analysis of annotated corpora still requires substantial methodological and technical effort. We explore how agentic large language models (LLMs) can streamline this process by reasoning over annotated corpora and producing interpretable, data-grounded answers to linguistic questions. We introduce an agentic framework for corpus-grounded grammatical analysis that integrates concepts such as natural-language task interpretation, code generation, and data-driven reasoning. As a proof of concept, we apply it to Universal Dependencies (UD) corpora, testing it on multilingual grammatical tasks inspired by the World Atlas of Language Structures (WALS). The evaluation spans 13 word-order features and over 170 languages, assessing system performance across three complementary dimensions (dominant-order accuracy, order-coverage completeness, and distributional fidelity), which reflect how well the system generalizes, identifies, and quantifies word-order variations. The results demonstrate the feasibility of combining LLM reasoning with structured linguistic data, offering a first step toward interpretable, scalable automation of corpus-based grammatical inquiry.
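To make the kind of corpus-grounded query concrete, below is a minimal sketch (not the authors' framework or its generated code) of how one WALS-style word-order statistic, the order of adjective and noun, could be tallied from a UD treebank in CoNLL-U format; the treebank file name is a hypothetical placeholder, and the tallied distribution corresponds loosely to the dominant-order and distributional-fidelity dimensions described above.

```python
# Minimal sketch (not the paper's pipeline): estimate the dominant
# adjective-noun order from a Universal Dependencies treebank in
# CoNLL-U format by counting whether "amod" dependents precede or
# follow their noun heads.
from collections import Counter


def _tally(sentence, counts):
    """Count AdjN vs NAdj for every amod relation in one sentence."""
    for cols in sentence:
        if cols[6] == "_":
            continue  # no syntactic head annotated
        tok_id, head, deprel = int(cols[0]), int(cols[6]), cols[7]
        if deprel == "amod" and head > 0:
            counts["AdjN" if tok_id < head else "NAdj"] += 1


def adjective_noun_order(conllu_path: str) -> Counter:
    """Return counts of adjective-before-noun vs. noun-before-adjective."""
    counts = Counter()
    sentence = []
    with open(conllu_path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                if not line and sentence:
                    _tally(sentence, counts)
                    sentence = []
                continue
            cols = line.split("\t")
            if "-" in cols[0] or "." in cols[0]:
                continue  # skip multiword tokens and empty nodes
            sentence.append(cols)
    if sentence:
        _tally(sentence, counts)
    return counts


if __name__ == "__main__":
    # Hypothetical treebank file; replace with any UD .conllu file.
    orders = adjective_noun_order("en_ewt-ud-train.conllu")
    total = sum(orders.values()) or 1
    for order, n in orders.most_common():
        print(f"{order}: {n} ({n / total:.1%})")
```

The most frequent label would serve as the dominant order, while the full count distribution is what a distributional-fidelity comparison against WALS-style classifications would draw on.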
Similar Papers
LLM/Agent-as-Data-Analyst: A Survey
Artificial Intelligence
Computers understand and analyze any kind of data.
Ground Truth Generation for Multilingual Historical NLP using LLMs
Computation and Language
Helps computers understand old books and writings.