Can LLMs subtract numbers?
By: Mayank Jobanputra, Nils Philipp Walter, Maitrey Mehta, and more
Potential Business Impact:
Teaches computers to do subtraction correctly.
We present a systematic study of subtraction in large language models (LLMs). While prior benchmarks emphasize addition and multiplication, subtraction has received comparatively little attention despite being structurally distinct as a non-commutative operation. We evaluate eight pretrained LLMs spanning four families on addition and subtraction problems. Our experiments reveal that subtraction accuracy lags behind addition by a wide margin. We find that errors for $(a - b)$ are concentrated in cases where $a < b$. In such cases, LLMs frequently produce the correct magnitude but omit the negative sign. Probing analyses show that LLMs internally encode whether a result should be negative, yet this information is often not reflected in the generated output. We further test well-known techniques such as few-shot prompting and instruction tuning to see whether they can improve the LLMs' performance. Our results suggest that while few-shot prompting yields modest gains, instruction-tuned models achieve near-perfect accuracy in generating the negative sign. Together, these findings provide a clearer characterization of the limitations and recoverability of LLMs' arithmetic capabilities in subtraction.
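The error analysis described in the abstract can be sketched in a few lines of code: sample subtraction problems, query a model, and tally how often the answer has the correct magnitude but a missing negative sign, split by whether $a < b$. The sketch below is not the authors' evaluation code; `query_model` is a hypothetical stand-in for whatever LLM interface is used, and the prompt format is an assumption.

```python
# Minimal sketch (assumed setup, not the paper's code) of the subtraction
# error analysis: count correct answers, sign-omission errors, and other
# errors, separately for a < b and a >= b.
import random
from collections import Counter


def query_model(prompt: str) -> str:
    """Hypothetical model call; replace with a real LLM API or local model."""
    raise NotImplementedError


def evaluate_subtraction(n_problems: int = 1000, max_value: int = 9999) -> Counter:
    stats = Counter()
    for _ in range(n_problems):
        a, b = random.randint(0, max_value), random.randint(0, max_value)
        truth = a - b
        answer = query_model(f"Compute {a} - {b}. Answer with a number only.")
        try:
            pred = int(answer.strip())
        except ValueError:
            stats["unparseable"] += 1
            continue

        case = "a<b" if a < b else "a>=b"
        if pred == truth:
            stats[f"correct ({case})"] += 1
        elif abs(pred) == abs(truth):
            # Correct magnitude but wrong (typically missing) sign -- the
            # failure mode the abstract highlights for a < b.
            stats[f"sign error ({case})"] += 1
        else:
            stats[f"other error ({case})"] += 1
    return stats
```

Comparing the "sign error" counts for the two cases would surface the pattern reported above: most subtraction failures are sign omissions occurring when $a < b$.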
Similar Papers
Do Large Language Models Truly Grasp Addition? A Rule-Focused Diagnostic Using Two-Integer Arithmetic
Computation and Language
Computers can't truly do math, just copy patterns.
Investigating the interaction of linguistic and mathematical reasoning in language models using multilingual number puzzles
Computation and Language
Computers learn math from different number words.
Modular Arithmetic: Language Models Solve Math Digit by Digit
Computation and Language
Helps computers do math like humans.