Value-Aware Numerical Representations for Transformer Language Models
By: Andreea Dutulescu, Stefan Ruseti, Mihai Dascalu
Transformer-based language models often achieve strong results on mathematical reasoning benchmarks while remaining fragile on basic numerical understanding and arithmetic operations. A central limitation is that numbers are processed as symbolic tokens whose embeddings do not explicitly encode numerical value, leading to systematic errors. We introduce a value-aware numerical representation that augments standard tokenized inputs with a dedicated prefix token whose embedding is explicitly conditioned on the underlying numerical value. This mechanism injects magnitude information directly into the model's input space while remaining compatible with existing tokenizers and decoder-only Transformer architectures. Evaluation on arithmetic benchmarks shows that the proposed approach outperforms baselines across numerical formats, tasks, and operand lengths. These results indicate that explicitly encoding numerical value is an effective and efficient way to improve fundamental numerical robustness in language models.
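To make the mechanism concrete, below is a minimal, hypothetical sketch of how a value-conditioned prefix embedding could be spliced into a decoder-only Transformer's input embeddings. The abstract does not specify the conditioning function, so the signed-log magnitude features, the small MLP, and all names (`ValuePrefixEmbedding`, `insert_value_prefix`, the dimensions) are illustrative assumptions rather than the authors' implementation.

```python
# Illustrative sketch (not the paper's implementation): a value-conditioned
# prefix embedding for numbers, inserted before the number's subword tokens.
import torch
import torch.nn as nn


class ValuePrefixEmbedding(nn.Module):
    """Maps a scalar numerical value to a single prefix-token embedding."""

    def __init__(self, d_model: int, hidden: int = 256):
        super().__init__()
        # Small MLP over simple value features; one plausible choice of many.
        self.mlp = nn.Sequential(
            nn.Linear(2, hidden),
            nn.GELU(),
            nn.Linear(hidden, d_model),
        )

    def forward(self, values: torch.Tensor) -> torch.Tensor:
        # values: (batch,) raw numerical values extracted from the text.
        sign = torch.sign(values)
        log_mag = torch.log1p(values.abs())           # compress dynamic range
        feats = torch.stack([sign, log_mag], dim=-1)  # (batch, 2)
        return self.mlp(feats)                        # (batch, d_model)


def insert_value_prefix(token_embeds: torch.Tensor,
                        prefix_embed: torch.Tensor,
                        number_start: int) -> torch.Tensor:
    """Insert the value-aware prefix embedding immediately before the
    number's first subword token. token_embeds: (seq, d_model)."""
    return torch.cat([token_embeds[:number_start],
                      prefix_embed.unsqueeze(0),
                      token_embeds[number_start:]], dim=0)


if __name__ == "__main__":
    d_model = 64
    prefix = ValuePrefixEmbedding(d_model)
    # Toy example: a 10-token sequence whose number ("365") starts at position 4.
    token_embeds = torch.randn(10, d_model)
    value_embed = prefix(torch.tensor([365.0]))[0]
    new_embeds = insert_value_prefix(token_embeds, value_embed, number_start=4)
    print(new_embeds.shape)  # torch.Size([11, 64])
```

Because the prefix is added at the embedding level, the tokenizer and the rest of the decoder-only architecture are left unchanged, which is consistent with the compatibility claim in the abstract.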