MEDEQUALQA: Evaluating Biases in LLMs with Counterfactual Reasoning
By: Rajarshi Ghosh, Abhay Gupta, Hudson McBride, and more
Potential Business Impact:
Checks if AI treats all patients fairly.
Large language models (LLMs) are increasingly deployed in clinical decision support, yet subtle demographic cues can influence their reasoning. Prior work has documented disparities in outputs across patient groups, but little is known about how internal reasoning shifts under controlled demographic changes. We introduce MEDEQUALQA, a counterfactual benchmark that perturbs only patient pronouns (he/him, she/her, they/them) while holding critical symptoms and conditions (CSCs) constant. Each clinical vignette is expanded into single-CSC ablations, producing three parallel datasets of approximately 23,000 items each (69,000 total). We evaluate GPT-4.1 and compute Semantic Textual Similarity (STS) between reasoning traces to measure stability across pronoun variants. Our results show overall high similarity (mean STS > 0.80) but reveal consistent, localized divergences in cited risk factors, guideline anchors, and differential ordering, even when final diagnoses remain unchanged. Our error analysis highlights cases in which the reasoning shifts, underscoring clinically relevant bias loci that may cascade into inequitable care. MEDEQUALQA offers a controlled diagnostic setting for auditing reasoning stability in medical AI.
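As a rough illustration of the evaluation loop the abstract describes, the sketch below renders the three pronoun variants of one vignette (clinical content held constant) and scores the pairwise stability of the resulting reasoning traces with cosine similarity over sentence embeddings. The vignette template, the get_reasoning_trace stub, and the all-MiniLM-L6-v2 embedding backbone are illustrative assumptions; the paper does not specify its prompt format or STS model.

```python
# Minimal sketch of the MEDEQUALQA idea: swap only the patient pronouns,
# hold the critical symptoms and conditions (CSCs) fixed, and measure the
# Semantic Textual Similarity of the reasoning traces across variants.
# Template, stub, and embedding model are assumptions for illustration.
from itertools import combinations

from sentence_transformers import SentenceTransformer, util

# The three pronoun sets perturbed by the benchmark.
PRONOUNS = {
    "he/him":    {"subj": "he",   "obj": "him",  "poss": "his"},
    "she/her":   {"subj": "she",  "obj": "her",  "poss": "her"},
    "they/them": {"subj": "they", "obj": "them", "poss": "their"},
}

# Hypothetical vignette template; the CSCs (chest pain, radiation pattern,
# exertional onset) are identical across all three variants.
TEMPLATE = ("A 54-year-old patient presents with chest pain radiating to "
            "{poss} left arm. {subj_cap} reports the pain began while "
            "{subj} was climbing stairs.")

def make_variant(pronoun_key: str) -> str:
    """Render one pronoun variant of the vignette, CSCs held constant."""
    p = PRONOUNS[pronoun_key]
    return TEMPLATE.format(subj=p["subj"], subj_cap=p["subj"].capitalize(),
                           obj=p["obj"], poss=p["poss"])

def get_reasoning_trace(vignette: str) -> str:
    """Stub for the LLM call (GPT-4.1 in the paper) that returns the
    model's free-text reasoning trace for a vignette."""
    return f"[model reasoning for] {vignette}"  # replace with a real API call

# Assumed STS backbone; any sentence-embedding model could stand in here.
encoder = SentenceTransformer("all-MiniLM-L6-v2")

traces = {k: get_reasoning_trace(make_variant(k)) for k in PRONOUNS}
embeddings = {k: encoder.encode(v, convert_to_tensor=True)
              for k, v in traces.items()}

# Pairwise STS across pronoun variants; low scores flag unstable reasoning.
for a, b in combinations(traces, 2):
    sts = util.cos_sim(embeddings[a], embeddings[b]).item()
    print(f"STS({a} vs. {b}) = {sts:.3f}")
```

Running this over every item and averaging the pairwise scores would yield an aggregate stability statistic of the kind the abstract reports; items whose pairwise STS falls well below the mean are natural candidates for the error analysis of localized divergences.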
Similar Papers
MediEval: A Unified Medical Benchmark for Patient-Contextual and Knowledge-Grounded Reasoning in LLMs
Computation and Language
Makes AI safer for doctors to use.
Adaptive Generation of Bias-Eliciting Questions for LLMs
Computers and Society
Finds unfairness in AI answers to real questions.
MedCaseReasoning: Evaluating and learning diagnostic reasoning from clinical case reports
Computation and Language
Helps AI doctors explain their thinking better.