Investigating the interaction of linguistic and mathematical reasoning in language models using multilingual number puzzles
By: Antara Raaghavi Bhattacharya, Isabel Papadimitriou, Kathryn Davidson, and more
Potential Business Impact:
Tests whether computers can learn math from different languages' number words.
Across languages, numeral systems vary widely in how they construct and combine numbers. While humans consistently learn to navigate this diversity, large language models (LLMs) struggle with linguistic-mathematical puzzles involving cross-linguistic numeral systems, which humans can learn to solve successfully. We investigate why this task is difficult for LLMs through a series of experiments that untangle the linguistic and mathematical aspects of numbers in language. Our experiments establish that models cannot consistently solve such problems unless the mathematical operations in the problems are explicitly marked using known symbols ($+$, $\times$, etc., as in "twenty + three"). In further ablation studies, we probe how individual parameters of numeral construction and combination affect performance. While humans use their linguistic understanding of numbers to make inferences about the implicit compositional structure of numerals, LLMs seem to lack this notion of implicit numeral structure. We conclude that the ability to flexibly infer compositional rules from implicit patterns in human-scale data remains an open challenge for current reasoning models.
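To make the implicit-versus-explicit contrast concrete, here is a minimal, hypothetical Python sketch (not taken from the paper): in many numeral systems, simple juxtaposition of words encodes either addition ("twenty three" = 20 + 3) or multiplication ("three hundred" = 3 × 100), depending on the relative size of the parts. Humans infer these unmarked rules; the abstract's finding is that models only handle them reliably when the operators are written out explicitly, as in "twenty + three". The lexicon and rule below are illustrative assumptions, not the paper's materials.

```python
# Illustrative toy numeral system with IMPLICIT composition rules.
# Juxtaposition means multiplication when the next word is a larger base
# ("three hundred" -> 3 * 100) and addition otherwise ("twenty three" -> 20 + 3).
LEXICON = {"three": 3, "twenty": 20, "hundred": 100}

def evaluate_implicit(numeral: str) -> int:
    """Evaluate a space-separated numeral using the implicit rules above."""
    total, current = 0, 0
    for word in numeral.split():
        value = LEXICON[word]
        if current and value > current:
            current *= value      # multiplicative step, e.g. "three hundred"
        else:
            total += current      # additive step: close off the previous chunk
            current = value
    return total + current

for phrase in ["twenty three", "three hundred", "three hundred twenty three"]:
    print(phrase, "=", evaluate_implicit(phrase))
# twenty three = 23
# three hundred = 300
# three hundred twenty three = 323
```

The explicitly marked condition described in the abstract would instead present the same quantities with overt symbols (e.g. "twenty + three", "3 × 100"), removing the need to infer which operation the juxtaposition encodes.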
Similar Papers
Large Language Models in Numberland: A Quick Test of Their Numerical Reasoning Abilities
Artificial Intelligence
Tests computers' math skills, finding they struggle with numbers.
Unravelling the Mechanisms of Manipulating Numbers in Language Models
Computation and Language
Shows how computers make math mistakes.
A Fragile Number Sense: Probing the Elemental Limits of Numerical Reasoning in LLMs
Machine Learning (CS)
Computers can't solve tricky math puzzles.