LLM Compression: How Far Can We Go in Balancing Size and Performance?
By: Sahil Sk, Debasish Dhal, Sonal Khosla, and more
Potential Business Impact:
Makes smart computer programs smaller and faster to run.
Quantization is an essential and popular technique for improving the accessibility of large language models (LLMs): it reduces memory usage and computational cost while largely preserving performance. In this study, we apply 4-bit Group Scaling Quantization (GSQ) and Generative Pretrained Transformer Quantization (GPTQ) to LLaMA 1B, Qwen 0.5B, and PHI 1.5B and evaluate their impact across multiple NLP tasks. We benchmark these models on MS MARCO (Information Retrieval), BoolQ (Boolean Question Answering), and GSM8K (Mathematical Reasoning), assessing both accuracy and efficiency. The study quantifies the trade-offs between model compression and task performance using three key evaluation metrics, namely accuracy, inference latency, and throughput (total output tokens generated per second), providing insight into the suitability of low-bit quantization for real-world deployment. These results allow practitioners to choose a model and quantization scheme that matches their deployment requirements. We discuss the advantages and drawbacks of GSQ and GPTQ on models of different sizes, and the results also serve as a benchmark for future experiments.
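To make the core idea concrete, below is a minimal sketch of 4-bit group-wise weight quantization of the general kind the abstract describes. This is not the authors' implementation: the group size of 128 and the per-group absmax scaling rule are assumptions for illustration, and the helper names (`quantize_4bit_groupwise`, `dequantize`) are hypothetical.

```python
# Minimal sketch (not the authors' code) of 4-bit group-wise weight quantization:
# each contiguous group of weights shares one scale, and values are rounded to
# the 16 signed integer levels representable in 4 bits.
import torch


def quantize_4bit_groupwise(weight: torch.Tensor, group_size: int = 128):
    """Quantize a 2-D weight matrix to 4 bits with one absmax scale per group."""
    out_features, in_features = weight.shape
    assert in_features % group_size == 0, "in_features must divide evenly into groups"
    w = weight.reshape(out_features, in_features // group_size, group_size)

    # One scale per group, chosen so the largest magnitude maps to the 4-bit extreme (7).
    scales = w.abs().amax(dim=-1, keepdim=True).clamp(min=1e-8) / 7.0
    q = torch.clamp(torch.round(w / scales), -8, 7).to(torch.int8)  # 4-bit range [-8, 7]
    return q, scales


def dequantize(q: torch.Tensor, scales: torch.Tensor) -> torch.Tensor:
    """Reconstruct an approximate full-precision weight from 4-bit codes and scales."""
    return (q.float() * scales).reshape(q.shape[0], -1)


if __name__ == "__main__":
    w = torch.randn(256, 1024)
    q, s = quantize_4bit_groupwise(w, group_size=128)
    w_hat = dequantize(q, s)
    print("mean abs quantization error:", (w - w_hat).abs().mean().item())
```

In practice, GPTQ goes further than this round-to-nearest sketch by using calibration data to compensate for rounding error, which is one source of the accuracy differences the study measures.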
Similar Papers
Optimizing LLMs Using Quantization for Mobile Execution
Machine Learning (CS)
Makes big AI models fit on your phone.
The Uneven Impact of Post-Training Quantization in Machine Translation
Computation and Language
Makes language translators work on smaller devices.
Towards Understanding Best Practices for Quantization of Vision-Language Models
CV and Pattern Recognition
Makes AI models smaller and faster.