Score: 1

Single-Stage Huffman Encoder for ML Compression

Published: January 15, 2026 | arXiv ID: 2601.10673v1

By: Aditya Agrawal, Albert Magyar, Hiteshwar Eswaraiah, and more

BigTech Affiliations: Google

Potential Business Impact:

Speeds up AI training and serving by compressing the data moved between accelerators.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Training and serving Large Language Models (LLMs) require partitioning data across multiple accelerators, where collective operations are frequently bottlenecked by network bandwidth. Lossless compression using Huffman codes is an effective way to alleviate the issue; however, its three-stage design, which requires on-the-fly frequency analysis, codebook generation, and transmission of the codebook along with the data, introduces computational, latency, and data overheads that are prohibitive for latency-sensitive scenarios such as die-to-die communication. This paper proposes a single-stage Huffman encoder that eliminates these overheads by using fixed codebooks derived from the average probability distribution of previous data batches. Through our analysis of the Gemma 2B model, we demonstrate that tensors exhibit high statistical similarity across layers and shards. Using this approach, we achieve compression within 0.5% of per-shard Huffman coding and within 1% of the ideal Shannon compressibility, enabling efficient on-the-fly compression.
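To make the single-stage idea concrete, here is a minimal Python sketch, not the paper's implementation: a Huffman codebook is built once from the pooled histogram of earlier batches and then reused to encode later batches, so the per-batch frequency pass and the codebook transmission both disappear. The helper names (build_huffman_codebook, encode_with_fixed_codebook) and the toy byte-valued batches are illustrative assumptions.

```python
import heapq
import itertools
from collections import Counter

def build_huffman_codebook(freqs):
    """Build a prefix codebook {symbol: bitstring} from symbol frequencies."""
    counter = itertools.count()  # tie-breaker so the heap never compares subtrees
    heap = [(w, next(counter), sym) for sym, w in freqs.items()]
    heapq.heapify(heap)
    if len(heap) == 1:  # degenerate case: a single symbol gets a 1-bit code
        return {heap[0][2]: "0"}
    while len(heap) > 1:
        w1, _, left = heapq.heappop(heap)
        w2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (w1 + w2, next(counter), (left, right)))
    codebook = {}
    def walk(node, prefix):
        if isinstance(node, tuple):      # internal node
            walk(node[0], prefix + "0")
            walk(node[1], prefix + "1")
        else:                            # leaf symbol
            codebook[node] = prefix
    walk(heap[0][2], "")
    return codebook

def encode_with_fixed_codebook(batch, codebook):
    """Single-stage encode: no per-batch frequency pass, no codebook sent with the data."""
    return "".join(codebook[sym] for sym in batch)

# Derive the fixed codebook from the pooled histogram of earlier toy batches
# (summing histograms is proportional to averaging, so it yields the same code).
history = [bytes([0, 0, 1, 2, 0, 1]), bytes([0, 1, 1, 0, 3, 0])]
pooled_hist = Counter()
for b in history:
    pooled_hist.update(b)

fixed_codebook = build_huffman_codebook(pooled_hist)
new_batch = bytes([0, 1, 0, 2, 0])
bits = encode_with_fixed_codebook(new_batch, fixed_codebook)
print(f"{len(new_batch) * 8} bits raw -> {len(bits)} bits encoded")
```

In a real deployment the fixed codebook would need to cover every symbol that can appear (or reserve an escape code); the paper instead derives its codebooks from the average probability distribution of previous data batches, relying on the observed statistical similarity of tensors across layers and shards.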

Country of Origin
πŸ‡ΊπŸ‡Έ United States

Page Count
5 pages

Category
Computer Science:
Machine Learning (CS)