How Quantization Shapes Bias in Large Language Models
By: Federico Marcuzzi, Xuefei Ning, Roy Schwartz, and more
Potential Business Impact:
Shows how shrinking AI models to run cheaper can change how fair they are.
This work presents a comprehensive evaluation of how quantization affects model bias, with particular attention to its impact on individual demographic subgroups. We focus on weight and activation quantization strategies and examine their effects across a broad range of bias types, including stereotypes, toxicity, sentiment, and fairness. We employ both probabilistic and generated text-based metrics across nine benchmarks and evaluate models varying in architecture family and reasoning ability. Our findings show that quantization has a nuanced impact on bias: while it can reduce model toxicity and does not significantly impact sentiment, it tends to slightly increase stereotypes and unfairness in generative tasks, especially under aggressive compression. These trends are generally consistent across demographic categories and model types, although their magnitude depends on the specific setting. Overall, our results highlight the importance of carefully balancing efficiency and ethical considerations when applying quantization in practice.
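To make the "aggressive compression" point concrete, here is a minimal sketch of symmetric per-tensor weight quantization, the general kind of scheme the paper studies; the function name, bit widths, and round-trip style below are illustrative assumptions, not the authors' actual implementation.

```python
import torch

def quantize_weights_symmetric(weight: torch.Tensor, num_bits: int = 8) -> torch.Tensor:
    """Illustrative round-trip (quantize then dequantize) of symmetric
    per-tensor weight quantization. Lower bit widths discard more precision."""
    qmax = 2 ** (num_bits - 1) - 1                 # e.g. 127 for int8
    scale = weight.abs().max() / qmax              # per-tensor scale factor
    q = torch.clamp(torch.round(weight / scale), -qmax - 1, qmax)
    return q * scale                               # dequantized weights used at inference

# Toy comparison: 4-bit (aggressive) quantization distorts weights far more than 8-bit.
w = torch.randn(4, 4)
print((w - quantize_weights_symmetric(w, 8)).abs().max())  # small reconstruction error
print((w - quantize_weights_symmetric(w, 4)).abs().max())  # larger error under aggressive compression
```

The larger reconstruction error at low bit widths is the regime in which the paper reports the clearest increases in stereotypes and unfairness on generative tasks.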
Similar Papers
Fair-GPTQ: Bias-Aware Quantization for Large Language Models
Computation and Language
Makes AI less biased when it talks.
Explaining How Quantization Disparately Skews a Model
Machine Learning (CS)
Makes AI fairer for everyone, not just some.
Quantized Large Language Models in Biomedical Natural Language Processing: Evaluation and Recommendation
Computation and Language
Makes big AI models work on smaller computers.