How Quantization Shapes Bias in Large Language Models

Published: August 25, 2025 | arXiv ID: 2508.18088v1

By: Federico Marcuzzi, Xuefei Ning, Roy Schwartz, and more

Potential Business Impact:

Shows how compressing AI models (quantization) changes their bias, helping practitioners balance efficiency against fairness.

Business Areas:
Text Analytics, Data and Analytics, Software

This work presents a comprehensive evaluation of how quantization affects model bias, with particular attention to its impact on individual demographic subgroups. We focus on weight and activation quantization strategies and examine their effects across a broad range of bias types, including stereotypes, toxicity, sentiment, and fairness. We employ both probabilistic and generated text-based metrics across nine benchmarks and evaluate models varying in architecture family and reasoning ability. Our findings show that quantization has a nuanced impact on bias: while it can reduce model toxicity and does not significantly impact sentiment, it tends to slightly increase stereotypes and unfairness in generative tasks, especially under aggressive compression. These trends are generally consistent across demographic categories and model types, although their magnitude depends on the specific setting. Overall, our results highlight the importance of carefully balancing efficiency and ethical considerations when applying quantization in practice.
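To make the weight-quantization setting concrete, below is a minimal sketch of symmetric per-tensor int8 quantization, a common compression scheme of the kind the abstract refers to. The helper names (`quantize_int8`, `dequantize`) are hypothetical, and this generic example is not the paper's specific method; it only illustrates the rounding error that such compression introduces into model weights.

```python
import numpy as np

def quantize_int8(weights: np.ndarray) -> tuple[np.ndarray, float]:
    """Map float weights to int8 with a single symmetric scale factor."""
    scale = np.max(np.abs(weights)) / 127.0  # largest magnitude maps to 127
    q = np.clip(np.round(weights / scale), -128, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights; the rounding error here is the
    kind of perturbation that can subtly shift model behavior and bias."""
    return q.astype(np.float32) * scale

w = np.random.randn(4, 4).astype(np.float32)
q, s = quantize_int8(w)
print("max reconstruction error:", np.abs(w - dequantize(q, s)).max())
```

More aggressive compression (e.g. fewer bits per weight) enlarges this reconstruction error, which is consistent with the abstract's observation that bias effects grow under aggressive quantization.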


Page Count
35 pages

Category
Computer Science:
Computation and Language