Score: 1

Efficient numeracy in language models through single-token number embeddings

Published: October 8, 2025 | arXiv ID: 2510.06824v1

By: Linus Kreitner, Paul Hager, Jonathan Mengedoht, and more

Potential Business Impact:

Language models could perform numerical calculations faster and more accurately, reducing their reliance on external tools or long reasoning chains.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

To drive progress in science and engineering, large language models (LLMs) must be able to process large amounts of numerical data and solve long calculations efficiently. This is currently only possible through the use of external tools or extensive reasoning chains, either limiting the numerical intuition of LLMs or limiting the length of problems they can solve. We show that frontier LLMs require excessive amounts of reasoning tokens to solve even basic calculations, which is exacerbated by their tokenization strategies that split single numbers into multiple tokens. This motivates the need for efficient and effective single-token number encodings. We introduce a set of desiderata for such encodings and show that existing approaches fail to fulfill them. To address these shortcomings, we propose BitTokens, a novel tokenization strategy that embeds any number into a single token using its IEEE 754 binary floating-point representation. Through extensive experiments we show that our BitTokens allow even small language models to learn algorithms that solve basic arithmetic operations nearly perfectly. This newly gained efficiency could expand the length and complexity of problems language models can solve.
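As a rough illustration of the core idea (a sketch, not the authors' implementation), the snippet below encodes a number as the 64 bits of its IEEE 754 double-precision representation, yielding a fixed-length bit vector that could serve as a single-token number embedding; the function name and details are hypothetical.

```python
import struct
import numpy as np

def number_to_bit_vector(value: float) -> np.ndarray:
    """Encode a number as the 64 bits of its IEEE 754 double-precision
    representation (sign, exponent, mantissa) in a single fixed-length vector."""
    packed = struct.pack(">d", value)                       # 8 bytes, big-endian double
    bits = np.unpackbits(np.frombuffer(packed, dtype=np.uint8))
    return bits.astype(np.float32)                          # 64-dim vector of 0.0 / 1.0

# Every number, regardless of digit count, maps to one fixed-length vector,
# i.e. one token slot, rather than a variable number of digit tokens.
for x in [3.14159, -2.0, 1e100]:
    print(x, "".join(str(int(b)) for b in number_to_bit_vector(x)))
```

Such a bit vector would then be projected into the model's embedding space, so that arithmetic over arbitrarily large or precise numbers costs a single token rather than many digit tokens.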

Country of Origin
🇩🇪 Germany


Page Count
28 pages

Category
Computer Science: Machine Learning (CS)