Microscaling Floating Point Formats for Large Language Models

Published: October 2, 2025 | arXiv ID: 2510.01863v1

By: Marco Cococcioni, Dario Pagani, Federico Rossi

Potential Business Impact:

Reduces the memory and compute needed to train and run large language models.

Business Areas:
DSP Hardware

The increasing computational and memory demands of large language models (LLMs) necessitate innovative approaches to optimize resource usage without compromising performance. This paper leverages microscaling floating-point formats, a technique designed to address these challenges by reducing the storage and computational overhead of numerical representations in LLMs. Unlike traditional floating-point representations that allocate a dedicated scale to each value, microscaling employs a single scale shared across a block of values, enabling compact one-byte floating-point representations while preserving an extended dynamic range. We explore the application of microscaling to 8-bit floating-point formats to significantly reduce memory footprint and computational cost. We tested several configurations of microscaling floats within the GPT-2 LLM architecture, demonstrating that microscaling data formats can achieve competitive accuracy during training and inference, proving their efficacy as a resource-efficient alternative for deploying LLMs at scale. The source code is publicly available at: https://github.com/unipi-dii-compressedarith/llm.c-sve
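To illustrate the shared-scale idea described above, here is a minimal C sketch of block-wise microscaling quantization. It assumes a block size of 32, an E4M3-style element range with maximum magnitude 448, and it keeps the scaled elements as floats rather than packing them into one byte; the block size, element format, and rounding used in the paper and in the llm.c-sve code may differ.

```c
/*
 * Minimal sketch of block-wise microscaling (MX-style) quantization.
 * Assumptions: block size 32, E4M3-like element range (max |x| = 448),
 * elements stored as scaled floats instead of packed 1-byte codes.
 */
#include <math.h>
#include <stdio.h>
#include <stdint.h>

#define BLOCK_SIZE 32
#define FP8_MAX 448.0f   /* assumed max magnitude of an E4M3-style element */

typedef struct {
    int8_t shared_exp;          /* shared power-of-two scale for the block */
    float  elems[BLOCK_SIZE];   /* stand-in for the 1-byte encoded elements */
} mx_block_t;

/* Quantize one block: choose a shared power-of-two scale so the largest
 * magnitude fits the element format, then scale every value by it. */
static void mx_quantize(const float *in, mx_block_t *out) {
    float max_abs = 0.0f;
    for (int i = 0; i < BLOCK_SIZE; i++) {
        float a = fabsf(in[i]);
        if (a > max_abs) max_abs = a;
    }
    /* e = ceil(log2(max_abs / FP8_MAX)) guarantees max_abs * 2^-e <= FP8_MAX;
     * use 0 when the block is all zeros. */
    int e = (max_abs > 0.0f) ? (int)ceilf(log2f(max_abs / FP8_MAX)) : 0;
    out->shared_exp = (int8_t)e;
    float scale = ldexpf(1.0f, -e);   /* 2^-e */
    for (int i = 0; i < BLOCK_SIZE; i++) {
        /* A real implementation would round to the nearest representable
         * FP8 value here; this sketch only applies the shared scale. */
        out->elems[i] = in[i] * scale;
    }
}

/* Dequantize: multiply each element back by the shared 2^e scale. */
static void mx_dequantize(const mx_block_t *in, float *out) {
    float scale = ldexpf(1.0f, in->shared_exp);
    for (int i = 0; i < BLOCK_SIZE; i++) out[i] = in->elems[i] * scale;
}

int main(void) {
    float x[BLOCK_SIZE], y[BLOCK_SIZE];
    mx_block_t b;
    for (int i = 0; i < BLOCK_SIZE; i++) x[i] = 0.01f * (float)(i - 16);
    mx_quantize(x, &b);
    mx_dequantize(&b, y);
    printf("shared exponent: %d, x[5]=%f, roundtrip=%f\n",
           b.shared_exp, x[5], y[5]);
    return 0;
}
```

Because only one exponent is stored per block, the per-element cost stays at one byte plus a small shared overhead, which is where the memory savings over per-value scaling come from.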

Country of Origin
🇮🇹 Italy

Page Count
11 pages

Category
Computer Science:
Neural and Evolutionary Computing