MedVoiceBias: A Controlled Study of Audio LLM Behavior in Clinical Decision-Making
By: Zhi Rui Tam, Yun-Nung Chen
Potential Business Impact:
Voice changes how computers give medical advice.
As large language models move from text-based interfaces to audio interactions in clinical settings, the paralinguistic cues carried in speech may introduce new vulnerabilities. We evaluated audio LLMs on 170 clinical cases, each synthesized into speech from 36 distinct voice profiles spanning age, gender, and emotion. Our findings reveal a severe modality bias: surgical recommendation rates for audio inputs differed from those for identical text-based inputs by as much as 35%, with one model recommending surgery 80% less often on audio. Further analysis uncovered age disparities of up to 12% between young and elderly voices, which persisted in most models despite chain-of-thought prompting. Explicit reasoning eliminated gender bias, but the effect of emotion could not be assessed because the models recognized vocal emotion poorly. These results demonstrate that audio LLMs can base clinical decisions on a patient's voice characteristics rather than on medical evidence, a flaw that risks perpetuating healthcare disparities. We conclude that bias-aware architectures are urgently needed before these models are deployed clinically.
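To make the protocol concrete, here is a minimal sketch of the kind of modality-gap measurement the abstract describes. It assumes the 36 voice profiles decompose as 3 ages × 2 genders × 6 emotions (the paper does not specify the factor levels here), and the `synthesize` and `recommends_surgery` helpers are hypothetical placeholders, not the authors' code:

```python
from itertools import product

# Assumed decomposition of the 36 voice profiles; the exact levels
# used in the paper are not stated in this abstract.
AGES = ["young", "middle-aged", "elderly"]
GENDERS = ["female", "male"]
EMOTIONS = ["neutral", "happy", "sad", "angry", "fearful", "anxious"]
assert len(AGES) * len(GENDERS) * len(EMOTIONS) == 36


def synthesize(case_text: str, age: str, gender: str, emotion: str) -> bytes:
    """Placeholder TTS call: render the clinical vignette in the given voice."""
    raise NotImplementedError("plug in a TTS system here")


def recommends_surgery(model, case_text: str, audio: bytes = None) -> bool:
    """Placeholder query: ask the (audio) LLM for a management decision
    and parse whether it recommends surgery."""
    raise NotImplementedError("plug in the model API here")


def modality_gap(model, cases: list) -> float:
    """Surgical-recommendation rate on text minus the rate on audio,
    averaged over all voice profiles for the same vignettes."""
    text_rate = sum(recommends_surgery(model, c) for c in cases) / len(cases)

    audio_hits, audio_total = 0, 0
    for case, (age, gender, emotion) in product(
        cases, product(AGES, GENDERS, EMOTIONS)
    ):
        wav = synthesize(case, age, gender, emotion)
        audio_hits += recommends_surgery(model, case, audio=wav)
        audio_total += 1
    return text_rate - audio_hits / audio_total
```

The same loop, grouped by `age` (or `gender`, or `emotion`) instead of pooled, would yield the per-attribute disparities the study reports, such as the 12% young-versus-elderly gap.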
Similar Papers
When Audio and Text Disagree: Revealing Text Bias in Large Audio-Language Models
Computation and Language
AI ignores sounds when text disagrees.
Bias in Large Language Models Across Clinical Applications: A Systematic Review
Computation and Language
Reviews AI bias across clinical uses for fairness.
Who Gets the Mic? Investigating Gender Bias in the Speaker Assignment of a Speech-LLM
Computation and Language
AI voices show some gender leanings.