Power-of-Two (PoT) Weights in Large Language Models (LLMs)
By: Mahmoud Elgenedy
Potential Business Impact:
Reduces the memory footprint and computational cost of large language models, enabling deployment on resource-constrained edge devices.
The complexity of neural networks is increasing rapidly due to the massive growth in model parameters. Specifically, in Large Language Models (LLMs), the number of parameters has grown exponentially in the past few years, for example from 1.5 billion in GPT-2 to 175 billion in GPT-3. This poses a significant implementation challenge, especially for edge devices where memory and processing power are very limited. In this work, we investigate reducing LLM complexity with a special type of quantization, power-of-two (PoT), applied to linear-layer weights and transformer tables. PoT not only reduces memory but, more importantly, yields a significant computational reduction by converting multiplications into bit shifts. We obtained preliminary results of PoT quantization on a Nano-GPT implementation trained on the Shakespeare dataset, and then extended the results to the 124M-parameter GPT-2 model. The PoT quantization results are very promising, with cross-entropy loss degradation of $\approx$[1.3-0.88] for a number of bits in the range [4-6] used to represent the power levels.
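To make the bit-shift idea concrete, the sketch below shows a minimal power-of-two weight quantizer in Python. It is not the paper's implementation; the function names (pot_quantize, pot_dequantize), the exponent-clipping scheme, and the bit-budget convention are illustrative assumptions. Each weight is rounded to a signed power of two, so only a sign and a small integer exponent are stored, and multiplying an activation by the weight reduces to adjusting its exponent.

```python
# Minimal sketch of power-of-two (PoT) weight quantization (illustrative, not the
# authors' code). Each weight is mapped to sign * 2**e, with e clipped to the
# window representable by the chosen number of exponent bits.
import numpy as np

def pot_quantize(w: np.ndarray, n_bits: int = 4):
    """Quantize weights to signed powers of two.

    Returns (sign, exponent) so that the dequantized weight is sign * 2**exponent.
    """
    sign = np.sign(w)
    # Round log2(|w|) to the nearest integer exponent; small epsilon avoids log(0).
    exp = np.round(np.log2(np.abs(w) + 1e-12)).astype(np.int32)
    # With n_bits per exponent there are 2**n_bits representable power levels;
    # clip exponents to a fixed window below the largest one (assumed convention).
    e_max = int(np.max(exp))
    e_min = e_max - (2 ** n_bits - 1)
    exp = np.clip(exp, e_min, e_max)
    return sign, exp

def pot_dequantize(sign: np.ndarray, exp: np.ndarray) -> np.ndarray:
    # Reconstruct sign * 2**exponent in floating point for evaluation purposes.
    return sign * np.exp2(exp.astype(np.float64))

# Example: a matrix-vector product with PoT weights. For integer activations,
# multiplying by 2**e is a literal bit shift (x << e for e >= 0, x >> -e for e < 0);
# here we emulate it in floating point with exp2.
rng = np.random.default_rng(0)
W = rng.normal(scale=0.05, size=(4, 8))
x = rng.normal(size=8)

sign, exp = pot_quantize(W, n_bits=4)
W_hat = pot_dequantize(sign, exp)
print("max abs quantization error:", np.max(np.abs(W - W_hat)))
print("y (PoT):", W_hat @ x)
print("y (fp) :", W @ x)
```

In fixed-point hardware, multiplying by $2^e$ is a shift by $e$ positions rather than a full multiply, which is why PoT weights can remove multipliers from the matrix-multiply datapath; the sketch above only emulates this in floating point to show the quantization error.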