Unravelling the Mechanisms of Manipulating Numbers in Language Models
By: Michal Štefánik, Timothee Mickus, Marek Kadlčík, and more
Potential Business Impact:
Explains how and where language models make math mistakes.
Recent work has shown that different large language models (LLMs) converge to similar and accurate input embedding representations for numbers. These findings conflict with the documented propensity of LLMs to produce erroneous outputs when dealing with numeric information. In this work, we aim to explain this conflict by exploring how language models manipulate numbers and by quantifying lower bounds on the accuracy of these mechanisms. We find that, despite the errors surfaced in their outputs, different language models learn interchangeable representations of numbers that are systematic, highly accurate, and universal across their hidden states and types of input context. This allows us to create universal probes for each LLM and to trace information, including the causes of output errors, to specific layers. Our results lay the foundation for understanding how pre-trained LLMs manipulate numbers and outline the potential of more accurate probing techniques for targeted refinements of LLMs' architectures.
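The probes described above read a number's value directly from a model's hidden states, layer by layer. The following is a minimal sketch of that idea, not the authors' implementation: the model name (gpt2), the prompt template, and the per-layer ridge-regression probe are illustrative assumptions.

```python
# Sketch: per-layer linear probes that try to recover a number's value from the
# hidden state at its token position. Illustrative only; the model, prompts,
# and probe design are stand-ins, not the paper's released code.
import numpy as np
import torch
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in; the paper studies larger pre-trained LLMs
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

numbers = list(range(0, 1000, 7))           # toy set of target values
prompts = [f"The answer is {n}" for n in numbers]

# Collect the hidden state of the prompt's final token (the number) at every layer.
per_layer_feats = None
with torch.no_grad():
    for p in prompts:
        ids = tok(p, return_tensors="pt")
        out = model(**ids, output_hidden_states=True)
        # out.hidden_states: tuple of (num_layers + 1) tensors of shape [1, seq, dim]
        states = [h[0, -1].numpy() for h in out.hidden_states]
        if per_layer_feats is None:
            per_layer_feats = [[] for _ in states]
        for layer, s in enumerate(states):
            per_layer_feats[layer].append(s)

# Fit one linear probe per layer; held-out R^2 gives a rough lower bound on how
# linearly decodable the number's value is at that depth.
y = np.array(numbers, dtype=float)
for layer, feats in enumerate(per_layer_feats):
    X = np.stack(feats)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    probe = Ridge(alpha=1.0).fit(X_tr, y_tr)
    print(f"layer {layer:2d}  R^2 = {probe.score(X_te, y_te):.3f}")
```

Sweeping the per-layer scores shows at which depths the numeric value remains decodable, which is the kind of signal the paper uses to trace output errors to specific layers.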
Similar Papers
Pre-trained Language Models Learn Remarkably Accurate Representations of Numbers
Computation and Language
Fixes math mistakes in smart computer programs.
Investigating the interaction of linguistic and mathematical reasoning in language models using multilingual number puzzles
Computation and Language
Computers learn math from different number words.
Modular Arithmetic: Language Models Solve Math Digit by Digit
Computation and Language
Helps computers do math like humans.