Syntactic Blind Spots: How Misalignment Leads to LLMs' Mathematical Errors
By: Dane Williamson, Yangfeng Ji, Matthew Dwyer
Potential Business Impact:
Fixes LLM math mistakes by changing how the problems are phrased.
Large Language Models (LLMs) demonstrate strong mathematical problem-solving abilities but frequently fail on problems that deviate syntactically from their training distribution. We identify a systematic failure mode, syntactic blind spots, in which models misapply familiar reasoning strategies to problems that are semantically straightforward but phrased in unfamiliar ways. These errors are not due to gaps in mathematical competence, but rather reflect a brittle coupling between surface form and internal representation. To test this, we rephrase incorrectly answered questions using syntactic templates drawn from correct examples. These rephrasings, which preserve semantics while reducing structural complexity, often lead to correct answers. We quantify syntactic complexity using a metric based on Dependency Locality Theory (DLT), and show that higher DLT scores are associated with increased failure rates across multiple datasets. Our findings suggest that many reasoning errors stem from structural misalignment rather than conceptual difficulty, and that syntax-aware interventions can reveal and mitigate these inductive failures.
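The abstract refers to a syntactic complexity metric based on Dependency Locality Theory (DLT). As a rough illustration only, and not the authors' implementation, the sketch below approximates DLT integration cost from a spaCy dependency parse by counting the discourse referents (nouns, proper nouns, verbs) that intervene between each token and its syntactic head. The model name en_core_web_sm, the referent heuristic, and the example sentences are assumptions made for this sketch.

```python
# Minimal sketch of a DLT-style syntactic complexity score (assumption:
# integration cost is approximated by the number of discourse referents
# lying between each token and its head, summed over the sentence).
# Requires: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")

REFERENT_POS = {"NOUN", "PROPN", "VERB"}  # heuristic set of referent-bearing tags

def dlt_integration_cost(sentence: str) -> int:
    """Sum, over all tokens, the count of referent-bearing tokens
    strictly between the token and its syntactic head."""
    doc = nlp(sentence)
    cost = 0
    for tok in doc:
        if tok.dep_ == "ROOT":
            continue  # the root has no incoming dependency to score
        lo, hi = sorted((tok.i, tok.head.i))
        cost += sum(1 for t in doc[lo + 1 : hi] if t.pos_ in REFERENT_POS)
    return cost

if __name__ == "__main__":
    simple = "Sam has 3 apples and buys 2 more."
    convoluted = "The apples that Sam, who already had 3, decided to buy numbered 2."
    print(dlt_integration_cost(simple), dlt_integration_cost(convoluted))
```

Under this proxy, more deeply nested phrasings of the same question should receive higher scores, mirroring the paper's finding that higher DLT scores are associated with higher failure rates.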
Similar Papers
LLMs Know More Than Words: A Genre Study with Syntax, Metaphor & Phonetics
Computation and Language
Helps computers understand poetry and stories better.
Exploring the Potential and Limitations of Large Language Models for Novice Program Fault Localization
Software Engineering
Helps new coders find mistakes in their programs.
Distributional Semantics Tracing: A Framework for Explaining Hallucinations in Large Language Models
Computation and Language
Finds why AI makes up fake facts.