Single-Stage Huffman Encoder for ML Compression
By: Aditya Agrawal, Albert Magyar, Hiteshwar Eswaraiah, and more
Potential Business Impact:
Makes AI learn faster by shrinking data.
Training and serving Large Language Models (LLMs) require partitioning data across multiple accelerators, where collective operations are frequently bottlenecked by network bandwidth. Lossless compression using Huffman codes is an effective way to alleviate this issue; however, its three-stage design, which requires on-the-fly frequency analysis, codebook generation, and transmission of the codebook along with the data, introduces computational, latency, and data overheads that are prohibitive for latency-sensitive scenarios such as die-to-die communication. This paper proposes a single-stage Huffman encoder that eliminates these overheads by using fixed codebooks derived from the average probability distribution of previous data batches. Through our analysis of the Gemma 2B model, we demonstrate that tensors exhibit high statistical similarity across layers and shards. Using this approach, we achieve compression within 0.5% of per-shard Huffman coding and within 1% of the ideal Shannon compressibility, enabling efficient on-the-fly compression.
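A minimal Python sketch of the single-stage idea the abstract describes, under simplifying assumptions: byte-valued symbols, a toy calibration set standing in for "previous data batches," and a straightforward (non-canonical) Huffman tree. The function names and calibration procedure are illustrative, not the paper's implementation; the point is only that once the codebook is frozen, encoding a new batch needs no frequency pass and no codebook transmission.

```python
import heapq
from collections import Counter

def build_huffman_codebook(freqs):
    """Build a Huffman codebook {symbol: bitstring} from symbol frequencies."""
    # Heap items are (frequency, tiebreaker, tree); the tiebreaker keeps
    # comparisons away from the tree payload.
    heap = [(f, i, sym) for i, (sym, f) in enumerate(sorted(freqs.items()))]
    heapq.heapify(heap)
    if len(heap) == 1:  # degenerate case: a single distinct symbol
        return {heap[0][2]: "0"}
    tiebreak = len(heap)
    while len(heap) > 1:
        f1, _, t1 = heapq.heappop(heap)
        f2, _, t2 = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, tiebreak, (t1, t2)))
        tiebreak += 1
    codebook = {}
    def walk(tree, prefix):
        if isinstance(tree, tuple):
            walk(tree[0], prefix + "0")
            walk(tree[1], prefix + "1")
        else:
            codebook[tree] = prefix
    walk(heap[0][2], "")
    return codebook

def encode(data, codebook):
    """Single-stage encode: look up each symbol in the fixed codebook."""
    return "".join(codebook[b] for b in data)

# Calibration (done once, offline): average symbol frequencies over previously
# observed batches, then freeze one codebook shared by sender and receiver.
calibration_batches = [bytes([1, 1, 2, 3, 1, 2]), bytes([1, 2, 2, 1, 0, 1])]
avg_freqs = Counter()
for batch in calibration_batches:
    avg_freqs.update(batch)
fixed_codebook = build_huffman_codebook(avg_freqs)

# Later batches are encoded directly with the frozen codebook: no per-batch
# frequency analysis, no codebook generation, no codebook sent with the data.
new_batch = bytes([1, 2, 1, 1, 3, 0])
bitstream = encode(new_batch, fixed_codebook)
print(f"{len(bitstream)} coded bits vs {len(new_batch) * 8} raw bits")
```

In this sketch the compression ratio depends on how well the frozen calibration distribution matches later batches, which is exactly the statistical-similarity property the abstract reports for Gemma 2B tensors across layers and shards.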
Similar Papers
Huff-LLM: End-to-End Lossless Compression for Efficient LLM Inference
Machine Learning (CS)
Makes big computer brains fit on small devices.
Ecco: Improving Memory Bandwidth and Capacity for LLMs via Entropy-aware Cache Compression
Hardware Architecture
Makes smart computers run faster and use less memory.
Heterogeneous Low-Bandwidth Pre-Training of LLMs
Machine Learning (CS)
Trains big AI models with less internet.