Syntactic Blind Spots: How Misalignment Leads to LLMs' Mathematical Errors

Published: October 2, 2025 | arXiv ID: 2510.01831v1

By: Dane Williamson, Yangfeng Ji, Matthew Dwyer

Potential Business Impact:

Improves LLM accuracy on math problems simply by rephrasing how the questions are asked, with no retraining required.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Large Language Models (LLMs) demonstrate strong mathematical problem-solving abilities but frequently fail on problems that deviate syntactically from their training distribution. We identify a systematic failure mode, syntactic blind spots, in which models misapply familiar reasoning strategies to problems that are semantically straightforward but phrased in unfamiliar ways. These errors are not due to gaps in mathematical competence, but rather reflect a brittle coupling between surface form and internal representation. To test this, we rephrase incorrectly answered questions using syntactic templates drawn from correct examples. These rephrasings, which preserve semantics while reducing structural complexity, often lead to correct answers. We quantify syntactic complexity using a metric based on Dependency Locality Theory (DLT), and show that higher DLT scores are associated with increased failure rates across multiple datasets. Our findings suggest that many reasoning errors stem from structural misalignment rather than conceptual difficulty, and that syntax-aware interventions can reveal and mitigate these inductive failures.
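
The rephrasing intervention can be pictured as a simple prompt-construction step: pair a question the model answered incorrectly with the surface form of a question it solved. The sketch below is a hypothetical illustration, not the authors' pipeline; the prompt wording, function name, and example questions are all assumptions.

```python
# Hypothetical sketch of the syntax-aware rephrasing intervention.
# The prompt wording and example sentences are illustrative assumptions,
# not the paper's actual templates.

REPHRASE_TEMPLATE = (
    "Rewrite the question so it keeps exactly the same meaning, entities, "
    "and numbers, but mirrors the sentence structure of the example.\n"
    "Example (structure to imitate): {solved_example}\n"
    "Question to rewrite: {failed_question}\n"
    "Rewritten question:"
)

def build_rephrase_prompt(failed_question: str, solved_example: str) -> str:
    """Pair a question the model got wrong with a syntactic template
    drawn from a question it answered correctly."""
    return REPHRASE_TEMPLATE.format(
        solved_example=solved_example,
        failed_question=failed_question,
    )

prompt = build_rephrase_prompt(
    failed_question=(
        "Of the pencils that, having been bought by Sam, numbered 12, "
        "how many remain if 5 were given away?"
    ),
    solved_example="Sam bought 12 pencils and gave away 5. How many are left?",
)
print(prompt)
```

The rewritten question would then be sent to the same model; per the abstract, such semantics-preserving, structure-simplifying rewrites often flip an incorrect answer to a correct one.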
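
A working proxy for the DLT-based complexity metric can be computed from a dependency parse. The following is a minimal sketch, assuming spaCy with the `en_core_web_sm` model and Gibson's convention that nouns and verbs introduce new discourse referents; the paper's exact formulation may differ.

```python
import spacy

# Requires: pip install spacy && python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

# POS tags treated as introducing new discourse referents
# (nouns and verbs, following Gibson's DLT; a simplifying assumption).
REFERENT_POS = {"NOUN", "PROPN", "VERB"}

def dlt_integration_cost(sentence: str) -> int:
    """Sum, over all dependency arcs, the number of referent-introducing
    words intervening between each word and its syntactic head."""
    doc = nlp(sentence)
    total = 0
    for token in doc:
        if token.dep_ == "ROOT":
            continue  # the root has no incoming arc to score
        lo, hi = sorted((token.i, token.head.i))
        # Count referents strictly between the two endpoints of the arc.
        total += sum(1 for t in doc[lo + 1 : hi] if t.pos_ in REFERENT_POS)
    return total

# Longer dependencies spanning more referents yield higher cost, the
# structures the paper links to elevated failure rates.
print(dlt_integration_cost("Sam bought 12 pencils and gave away 5."))
print(dlt_integration_cost(
    "Of the pencils that, having been bought by Sam, numbered 12, "
    "how many remain if 5 were given away?"
))
```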

Country of Origin
🇺🇸 United States

Page Count
14 pages

Category
Computer Science:
Computation and Language