Fair-GPTQ: Bias-Aware Quantization for Large Language Models
By: Irina Proskurina, Guillaume Metzler, Julien Velcin
Potential Business Impact:
Makes AI less biased in the text it generates, while keeping models small and fast.
High memory demands of generative language models have drawn attention to quantization, which reduces computational cost, memory usage, and latency by mapping model weights to lower-precision integers. Approaches such as GPTQ effectively minimize input-weight product errors during quantization; however, recent empirical studies show that they can increase biased outputs and degrade performance on fairness benchmarks, and it remains unclear which specific weights cause this issue. In this work, we draw new links between quantization and model fairness by adding explicit group-fairness constraints to the quantization objective and introduce Fair-GPTQ, the first quantization method explicitly designed to reduce unfairness in large language models. The added constraints guide the learning of the rounding operation toward less-biased text generation for protected groups. Specifically, we focus on stereotype generation involving occupational bias and discriminatory language spanning gender, race, and religion. Fair-GPTQ has minimal impact on performance, preserving at least 90% of baseline accuracy on zero-shot benchmarks, reduces unfairness relative to a half-precision model, and retains the memory and speed benefits of 4-bit quantization. We also compare the performance of Fair-GPTQ with existing debiasing methods and find that it achieves performance on par with the iterative null-space projection debiasing approach on racial-stereotype benchmarks. Overall, the results validate our theoretical solution to the quantization problem with a group-bias term, highlight its applicability for reducing group bias at quantization time in generative models, and demonstrate that our approach can further be used to analyze channel- and weight-level contributions to fairness during quantization.
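The core idea — augmenting a GPTQ-style quantization objective with a group-fairness term — can be sketched as follows. This is a minimal illustration, not the authors' Fair-GPTQ algorithm: it uses simple per-channel round-to-nearest quantization in place of GPTQ's error-compensated rounding, and the function names, the paired group inputs `Xa`/`Xb`, and the trade-off weight `lam` are all assumptions for exposition.

```python
import numpy as np

def quantize_rtn(W, n_bits=4):
    """Per-channel symmetric round-to-nearest quantization: a simple
    stand-in for the learned rounding that GPTQ (and Fair-GPTQ) optimize.
    Returns the dequantized weights so errors can be measured directly."""
    qmax = 2 ** (n_bits - 1) - 1
    scale = np.abs(W).max(axis=1, keepdims=True) / qmax
    scale[scale == 0] = 1.0  # guard against all-zero rows
    Wq = np.round(W / scale).clip(-qmax - 1, qmax)
    return Wq * scale

def fair_quant_objective(W, Wq, X, Xa, Xb, lam=0.1):
    """Combined objective in the spirit of the paper: a GPTQ-style
    input-weight reconstruction error on calibration inputs X, plus a
    group-bias term penalizing output differences between paired inputs
    for two protected groups (Xa vs. Xb). `lam` is a hypothetical
    weighting between fidelity and fairness."""
    recon = np.linalg.norm(W @ X - Wq @ X) ** 2
    bias = np.linalg.norm(Wq @ Xa - Wq @ Xb) ** 2
    return recon + lam * bias
```

In the actual method, a term of this kind guides the rounding decisions themselves during quantization; here it only scores a fixed rounding, which is enough to show how the reconstruction and group-bias terms interact.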
Similar Papers
How Quantization Shapes Bias in Large Language Models
Computation and Language
Makes AI fairer by checking how it learns.
Explaining How Quantization Disparately Skews a Model
Machine Learning (CS)
Makes AI fairer for everyone, not just some.
LLM Compression: How Far Can We Go in Balancing Size and Performance?
Computation and Language
Makes smart computer programs run faster and smaller.