Enhancing Trustworthiness with Mixed Precision: Benchmarks, Opportunities, and Challenges

Published: November 27, 2025 | arXiv ID: 2511.22483v1

By: Guanxi Lu, Hao Mark Chen, Zhiqiang Que, and more

Potential Business Impact:

Makes compressed AI models safer to deploy in high-stakes domains such as finance and healthcare.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

Large language models (LLMs) have shown promising performance across various tasks, but their autoregressive decoding process poses significant challenges for efficient deployment on existing AI hardware. Quantization alleviates memory and compute pressure by compressing weights, activations, and KV caches to low-precision formats while preserving generation quality. However, existing quantization frameworks typically focus on perplexity or classification accuracy and often omit critical trustworthiness metrics. This gap introduces risks when quantized LLMs are applied in downstream high-stakes domains such as finance and healthcare. In this work, we systematically investigate the impact of quantization on four trustworthiness metrics (adversarial robustness, fairness, machine ethics, and out-of-distribution robustness) and identify instability across compression ratios and quantization methods. Building on these observations, we develop a novel precision-ensemble voting approach that aggregates predictions from mixed-precision variants of the same model and consistently improves performance by up to $5.8\%$ on trustworthiness metrics. Our results highlight the importance of considering trustworthiness when developing model compression techniques and point to research opportunities at the intersection of compression and trustworthiness for safety-critical applications.
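The abstract describes precision-ensemble voting only at a high level. The snippet below is a minimal sketch of one plausible realization, assuming each mixed-precision variant of the same model (e.g. 4-bit, 8-bit, and FP16 copies) is wrapped as a callable that maps an input to a discrete label; the function name and stub predictors are illustrative, not the authors' implementation.

```python
# Minimal sketch of precision-ensemble voting (illustrative, not the
# authors' implementation): each mixed-precision variant of the same
# model is wrapped as a callable that maps an input string to a label,
# and the ensemble returns the majority label across variants.
from collections import Counter
from typing import Callable, Iterable

def precision_ensemble_vote(
    prompt: str,
    variants: Iterable[Callable[[str], str]],
) -> str:
    """Majority vote over predictions from mixed-precision model variants."""
    votes = [predict(prompt) for predict in variants]
    # Counter preserves insertion order, so ties resolve in favor of the
    # label produced by the earliest-listed variant.
    return Counter(votes).most_common(1)[0][0]

# Usage with stub predictors standing in for quantized LLM inference
# (hypothetical 4-bit, 8-bit, and FP16 variants of one model):
if __name__ == "__main__":
    w4 = lambda p: "unsafe"   # hypothetical 4-bit variant
    w8 = lambda p: "safe"     # hypothetical 8-bit variant
    fp16 = lambda p: "safe"   # hypothetical FP16 variant
    print(precision_ensemble_vote("example input", [w4, w8, fp16]))  # -> safe
```

Because the variants share weights and differ only in precision, this kind of ensemble adds no extra training cost; the trade-off is running inference once per precision level per input.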

Country of Origin
🇬🇧 United Kingdom

Page Count
7 pages

Category
Computer Science:
Machine Learning (CS)