Do Large Language Models Grasp The Grammar? Evidence from Grammar-Book-Guided Probing in Luxembourgish
By: Lujun Li, Yewei Song, Lama Sleem, and more
Potential Business Impact:
Tests if computers truly understand language rules.
Grammar refers to the system of rules that governs the structural organization and the semantic relations among linguistic units such as sentences, phrases, and words within a given language. In natural language processing, there remains a notable scarcity of grammar-focused evaluation protocols, a gap that is even more pronounced for low-resource languages. Moreover, the extent to which large language models genuinely comprehend grammatical structure, especially the mapping between syntactic structures and meanings, remains under debate. To investigate this issue, we propose a Grammar-Book-Guided evaluation pipeline, intended as a systematic and generalizable framework for grammar evaluation consisting of four key stages; in this work, we take Luxembourgish as a case study. The results show a weak positive correlation between translation performance and grammatical understanding, indicating that strong translation does not necessarily imply deep grammatical competence. Larger models perform well overall thanks to their semantic strength but remain weak in morphology and syntax, struggling particularly with minimal-pair tasks; strong reasoning ability, however, offers a promising route to improving their grammatical understanding.
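The minimal-pair probing the abstract mentions can be made concrete with a short sketch: the model is shown two sentences that differ in a single grammatical feature, and it is credited when it assigns higher likelihood to the well-formed one. The sketch below is illustrative rather than the paper's implementation; the model name and the Luxembourgish example pair are assumptions chosen for demonstration.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical choice of model; any causal LM with some Luxembourgish
# coverage could be substituted here.
MODEL_NAME = "gpt2"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

def sentence_log_likelihood(sentence: str) -> float:
    """Total log-likelihood of a sentence under the language model."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs, labels=inputs["input_ids"])
    # outputs.loss is the mean negative log-likelihood over the
    # predicted positions (all tokens except the first), so we
    # multiply by that count to recover the summed log-likelihood.
    n_predicted = inputs["input_ids"].shape[1] - 1
    return -outputs.loss.item() * n_predicted

# A minimal pair differs in exactly one grammatical feature.
# This illustrative pair encodes a subject-verb agreement violation,
# not an item from the paper's dataset.
grammatical = "D'Kanner spillen am Gaart."    # "The children play in the garden."
ungrammatical = "D'Kanner spillt am Gaart."   # plural subject, singular verb

correct = sentence_log_likelihood(grammatical) > sentence_log_likelihood(ungrammatical)
print("Model prefers the grammatical sentence:", correct)
```

Accuracy over a set of such pairs yields the kind of grammatical-understanding score that the paper correlates with translation performance.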
Similar Papers
Grammaticality Judgments in Humans and Language Models: Revisiting Generative Grammar with LLMs
Computation and Language
Computers learn grammar rules from reading text.
Read it in Two Steps: Translating Extremely Low-Resource Languages with Code-Augmented Grammar Books
Computation and Language
Teaches computers to translate rare languages better.
Linguistic Blind Spots of Large Language Models
Computation and Language
AI struggles to understand sentence parts.