Power-of-Two Quantization-Aware-Training (PoT-QAT) in Large Language Models (LLMs)

Published: January 5, 2026 | arXiv ID: 2601.02298v1

By: Mahmoud Elgenedy

BigTech Affiliations: Stanford University

Potential Business Impact:

Makes big AI models fit on small devices.

Business Areas:
Artificial Intelligence, Edge Computing

In Large Language Models (LLMs), the number of parameters has grown exponentially in the past few years, e.g., from 1.5 billion parameters in GPT-2 to 175 billion in GPT-3, and possibly more than a trillion in later versions. This poses a significant challenge for deployment, especially on edge devices. Unlike cloud computing, memory and processing power on edge devices are very limited, which necessitates novel ideas to make such applications feasible. In this work, we investigate compressing weights with a special quantization that restricts values to powers of two (PoT). This saves a large amount of memory, since only exponents need to be stored; more importantly, it significantly reduces processing cost by replacing costly multiplications with low-cost bit shifts. To overcome the performance loss caused by this strict quantization, we investigate Quantization Aware Training (QAT) to recover performance through additional training. Results on GPT-2 (124M) show a major improvement for the quantized PoT model after additional training, with a 66% perplexity improvement and only a 1% BERT-Score loss relative to the baseline GPT-2. Memory savings are estimated at 87.5%, while inference is expected to be 3-10x faster with PoT quantization than with full precision.
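
The core mechanics described in the abstract can be illustrated with a short sketch. The snippet below is a minimal NumPy illustration, not the paper's implementation: it rounds each weight to the nearest signed power of two, stores only the sign and a small integer exponent (a 4-bit exponent versus a 32-bit float corresponds roughly to the 87.5% memory-saving estimate), and replaces multiplication by a PoT weight with a bit shift. The function names, the 4-bit exponent width, and the rounding rule are illustrative assumptions; the paper's exact quantizer and QAT procedure are not shown.

```python
import numpy as np

def pot_quantize(w, exp_bits=4):
    """Round each weight to a signed power of two: w_q = sign(w) * 2**e.

    Only the sign and a small integer exponent e are kept, which is where
    the memory saving comes from (e.g. ~4 bits vs. a 32-bit float).
    NOTE: illustrative sketch; the paper's exact quantizer may differ.
    """
    sign = np.sign(w).astype(np.int8)                       # -1, 0, or +1
    mag = np.where(w == 0, np.finfo(np.float32).tiny, np.abs(w))
    e = np.rint(np.log2(mag)).astype(np.int8)               # nearest exponent
    e_min, e_max = -(2 ** (exp_bits - 1)), 2 ** (exp_bits - 1) - 1
    return sign, np.clip(e, e_min, e_max)

def pot_dequantize(sign, e):
    """Rebuild the power-of-two weights (useful for accuracy checks)."""
    return sign.astype(np.float32) * np.exp2(e.astype(np.float32))

def shift_multiply(x_int, sign, e):
    """Multiply an integer activation by a PoT weight with bit shifts only:
    x * sign * 2**e == sign * (x << e) for e >= 0, else sign * (x >> -e)."""
    shifted = np.where(e >= 0,
                       np.left_shift(x_int, np.maximum(e, 0)),
                       np.right_shift(x_int, np.maximum(-e, 0)))
    return sign * shifted

# Toy demo on a small weight matrix.
rng = np.random.default_rng(0)
W = rng.normal(scale=0.05, size=(4, 4)).astype(np.float32)
sign, e = pot_quantize(W)
print("max quantization error:", np.abs(W - pot_dequantize(sign, e)).max())
print("shift-based product   :", shift_multiply(np.int32(1024), sign[0, 0], e[0, 0]))
```

In a typical QAT setup (not necessarily the exact scheme used in the paper), the forward pass would apply this quantizer to the weights while gradients update an underlying full-precision copy, commonly via a straight-through estimator; the additional training is what recovers the accuracy lost to the strict PoT grid.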

Country of Origin
🇺🇸 United States

Page Count
8 pages

Category
Computer Science:
Computation and Language