The Confidence Trap: Gender Bias and Predictive Certainty in LLMs

Published: January 12, 2026 | arXiv ID: 2601.07806v1

By: Ahmed Sabir, Markus Kängsepp, Rajesh Sharma

Potential Business Impact:

Assesses whether LLM confidence scores reliably flag gender bias, guiding the ethical deployment of AI in sensitive domains.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

The increased use of Large Language Models (LLMs) in sensitive domains has led to growing interest in how their confidence scores relate to fairness and bias. This study examines the alignment between LLM-predicted confidence and human-annotated bias judgments. Focusing on gender bias, the research investigates probability confidence calibration in contexts involving gendered pronoun resolution. The goal is to evaluate whether calibration metrics based on predicted confidence scores effectively capture fairness-related disparities in LLMs. The results show that, among the six state-of-the-art models evaluated, Gemma-2 demonstrates the worst calibration on the gender bias benchmark. The primary contribution of this work is a fairness-aware evaluation of LLMs' confidence calibration, offering guidance for ethical deployment. In addition, the authors introduce a new calibration metric, Gender-ECE, designed to measure gender disparities in pronoun resolution tasks.
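For context, Expected Calibration Error (ECE) bins predictions by confidence and averages the gap between per-bin accuracy and mean confidence. The abstract does not spell out how Gender-ECE is computed; a minimal sketch follows, assuming it amounts to ECE computed separately per pronoun-gender group plus the disparity between groups. The paper's exact formulation may differ, and all function names here are illustrative, not taken from the authors' code.

```python
import numpy as np

def ece(confidences, correct, n_bins=10):
    """Standard ECE: bin predictions into (lo, hi] confidence intervals,
    then take the sample-weighted average of |accuracy - mean confidence|."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    total = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            total += mask.mean() * gap  # mask.mean() = |bin| / n
    return total

def gender_ece(confidences, correct, genders):
    """Hypothetical Gender-ECE: ECE per pronoun-gender group, reported
    together with the largest between-group calibration disparity."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    genders = np.asarray(genders)
    per_group = {g: ece(confidences[genders == g], correct[genders == g])
                 for g in np.unique(genders)}
    disparity = max(per_group.values()) - min(per_group.values())
    return per_group, disparity

# Toy usage: model confidences on pronoun-resolution predictions, whether
# each prediction matched the human annotation, and the pronoun's gender.
conf = np.array([0.9, 0.8, 0.7, 0.95, 0.6, 0.85])
hit = np.array([1, 1, 0, 1, 0, 1])
gen = np.array(["f", "m", "f", "m", "f", "m"])
print(gender_ece(conf, hit, gen))
```

Under this reading, a well-calibrated but biased model would show a low overall ECE yet a large between-group disparity, which is the kind of gap a fairness-aware calibration metric is meant to surface.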

Repos / Data Links

Page Count
13 pages

Category
Computer Science:
Computation and Language