Efficient numeracy in language models through single-token number embeddings
By: Linus Kreitner, Paul Hager, Jonathan Mengedoht and more
Potential Business Impact:
Language models could solve math problems faster and more accurately, without relying on external tools.
To drive progress in science and engineering, large language models (LLMs) must be able to process large amounts of numerical data and solve long calculations efficiently. This is currently only possible through the use of external tools or extensive reasoning chains, either limiting the numerical intuition of LLMs or limiting the length of problems they can solve. We show that frontier LLMs require excessive amounts of reasoning tokens to solve even basic calculations, which is exacerbated by their tokenization strategies that split single numbers into multiple tokens. This motivates the need for efficient and effective single-token number encodings. We introduce a set of desiderata for such encodings and show that existing approaches fail to fulfill them. To address these shortcomings, we propose BitTokens, a novel tokenization strategy that embeds any number into a single token using its IEEE 754 binary floating-point representation. Through extensive experiments we show that our BitTokens allow even small language models to learn algorithms that solve basic arithmetic operations nearly perfectly. This newly gained efficiency could expand the length and complexity of problems language models can solve.
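To give a concrete sense of the idea, the sketch below shows one plausible way to turn a number into a single fixed-width bit vector from its IEEE 754 binary64 representation. This is an illustrative assumption, not the paper's implementation: the function name `bittoken_embedding`, the 64-bit width, the bit ordering, and the mapping of bits to ±1 are all choices made here for clarity; the actual BitTokens embedding and how it is projected into the model's embedding space may differ.

```python
import struct
import numpy as np

def bittoken_embedding(value: float) -> np.ndarray:
    """Hypothetical sketch: map a number to a 64-dim vector of its
    IEEE 754 double-precision bits, with {0, 1} remapped to {-1, +1}.
    The paper's actual BitTokens encoding may differ in width,
    bit ordering, scaling, or projection into model space."""
    # Pack the float into its 8-byte IEEE 754 binary64 representation
    # (big-endian, so the sign bit comes first).
    raw = struct.pack(">d", value)
    # Unpack the 8 bytes into 64 individual bits.
    bits = np.unpackbits(np.frombuffer(raw, dtype=np.uint8))
    # Zero-center the embedding: 0 -> -1.0, 1 -> +1.0.
    return bits.astype(np.float32) * 2.0 - 1.0

if __name__ == "__main__":
    emb = bittoken_embedding(3.14159)
    print(emb.shape)   # (64,)
    print(emb[:12])    # sign bit followed by the exponent bits
```

Because every number maps to exactly one such vector, a model reading numerical data never has to stitch a value together from several digit or sub-word tokens, which is the efficiency gain the abstract describes.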
Similar Papers
FoNE: Precise Single-Token Number Embeddings via Fourier Features
Computation and Language
Makes computers understand numbers faster and better.
What is a Number, That a Large Language Model May Know It?
Computation and Language
Computers learn numbers better from text.
Unravelling the Mechanisms of Manipulating Numbers in Language Models
Computation and Language
Finds how computers make math mistakes.