Score: 2

LAMP: Look-Ahead Mixed-Precision Inference of Large Language Models

Published: January 29, 2026 | arXiv ID: 2601.21623v1

By: Stanislav Budzinskiy, Marian Gloser, Tolunay Yilmaz, and more

Potential Business Impact:

Makes large language models more accurate and efficient at inference time, enabling fast, low-power local deployment.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Mixed-precision computations are a hallmark of the current stage of AI, driving progress in large language models towards efficient, locally deployable solutions. This article addresses the floating-point computation of compositionally rich functions, concentrating on transformer inference. Based on a rounding error analysis of a composition $f(g(\mathrm{x}))$, we provide an adaptive strategy that selects a small subset of components of $g(\mathrm{x})$ to be computed more accurately, while all other computations can be carried out with lower accuracy. We then explain how this strategy can be applied to different compositions within a transformer and illustrate its overall effect on transformer inference. We study the effectiveness of this algorithm numerically on GPT-2 models and demonstrate that even very low recomputation rates yield accuracy improvements of up to two orders of magnitude.
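The idea can be sketched for one concrete composition from a transformer, where $g$ is a matrix-vector product producing logits and $f$ is the softmax. Everything in the sketch below is an assumption for illustration: the float16/float64 precision pair, the magnitude-based top-$k$ selection rule, and all function names are hypothetical, not taken from the paper, whose selection criterion is derived from its rounding error analysis.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax.
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def lookahead_softmax_matvec(W, x, recompute_rate=0.01):
    """Sketch of look-ahead mixed-precision evaluation of softmax(W @ x).

    Assumptions (not from the paper): float16 is the low-precision
    format, float64 the high-precision one, and the components most
    likely to dominate the softmax (the largest logits) are the ones
    selected for accurate recomputation.
    """
    # Cheap first pass: compute all logits in low precision.
    y = (W.astype(np.float16) @ x.astype(np.float16)).astype(np.float32)

    # Look-ahead selection: pick a small subset of components whose
    # errors would propagate most strongly through the outer function.
    k = max(1, int(recompute_rate * y.size))
    idx = np.argsort(y)[-k:]

    # Recompute only the selected rows of W @ x in high precision.
    y[idx] = (W[idx].astype(np.float64) @ x.astype(np.float64)).astype(np.float32)

    # Outer function is evaluated once on the corrected intermediate.
    return softmax(y)

# Toy comparison against a full high-precision reference.
rng = np.random.default_rng(0)
d = 4096
W = rng.standard_normal((d, d)).astype(np.float32)
x = rng.standard_normal(d).astype(np.float32)

p = lookahead_softmax_matvec(W, x, recompute_rate=0.01)
p_ref = softmax(W.astype(np.float64) @ x.astype(np.float64))
print("max abs error:", np.abs(p - p_ref).max())
```

The point of the sketch is the cost profile: only a fraction `recompute_rate` of the components of $g(\mathrm{x})$ is reevaluated accurately, which matches the paper's observation that very low recomputation rates already recover most of the lost accuracy.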

Country of Origin
🇦🇹 Austria


Page Count
20 pages

Category
Computer Science:
Machine Learning (CS)