When Voice Matters: Evidence of Gender Disparity in Positional Bias of SpeechLLMs
By: Shree Harsha Bokkahalli Satish, Gustav Eje Henter, Éva Székely
Potential Business Impact:
Finds bias in how AI responds to voices, especially female ones.
The rapid development of SpeechLLM-based conversational AI systems has created a need for robust benchmarking of these systems, including aspects of fairness and bias. At present, such benchmarks typically rely on multiple-choice question answering (MCQA). In this paper, we present the first token-level probabilistic evaluation and response-based study of several issues affecting the use of MCQA in SpeechLLM benchmarking: 1) we examine how model temperature and prompt design affect gender and positional bias on an MCQA gender-bias benchmark; 2) we examine how these biases are affected by the gender of the input voice; and 3) we study to what extent the observed trends carry over to a second gender-bias benchmark. Our results show that concerns about positional bias from the text domain are equally valid in the speech domain. We also find the effect to be stronger for female voices than for male voices. To our knowledge, this is the first study to isolate positional-bias effects in SpeechLLM-based gender-bias benchmarks. We conclude that current MCQA benchmarks do not account for speech-based bias and that alternative strategies are needed to ensure fairness towards all users.
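The token-level evaluation the abstract refers to can be pictured with a short sketch: score each answer option by the probability the model assigns to its option letter, then re-score with the option order reversed; positional bias shows up when the preferred position stays put while the content moves. The sketch below is an illustrative assumption, not the authors' code: it uses a text-only Hugging Face causal LM (`gpt2` as a stand-in, whereas the paper evaluates SpeechLLMs on speech input), and the question, prompt wording, and function names are hypothetical.

```python
# Minimal sketch of token-level probabilistic MCQA scoring with an
# order swap to expose positional bias. Not the paper's pipeline:
# model, prompt format, and the toy item below are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "gpt2"  # stand-in; the paper targets SpeechLLMs
tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL)
model.eval()

def option_logprobs(question: str, options: list[str]) -> list[float]:
    """Log-probability of each option letter as the next token after the prompt."""
    letters = ["A", "B", "C", "D"][: len(options)]
    prompt = (
        question + "\n"
        + "\n".join(f"{l}. {o}" for l, o in zip(letters, options))
        + "\nAnswer:"
    )
    ids = tok(prompt, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits[0, -1]  # logits for the next token
    logprobs = torch.log_softmax(logits, dim=-1)
    # Score each option by the log-prob of its letter token (" A", " B", ...).
    return [logprobs[tok.encode(f" {l}")[0]].item() for l in letters]

q = "Who is more likely to be the engineer?"  # toy item, not a benchmark question
opts = ["The man", "The woman", "Cannot be determined"]
fwd = option_logprobs(q, opts)
rev = option_logprobs(q, opts[::-1])
# If the highest score tracks a position rather than the option's content
# once the order is reversed, the model exhibits positional bias.
print("original order:", fwd)
print("reversed order:", rev)
```

In the speech setting studied in the paper, the question would be delivered as audio from speakers of different genders, and the same position-vs-content comparison would be run per voice to separate positional bias from gender bias.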
Similar Papers
Benchmarking Educational LLMs with Analytics: A Case Study on Gender Bias in Feedback
Computation and Language
Finds unfairness in AI teacher feedback.
Adaptive Generation of Bias-Eliciting Questions for LLMs
Computers and Society
Automatically creates questions that reveal unfair AI answers.
BiasFreeBench: a Benchmark for Mitigating Bias in Large Language Model Responses
Computation and Language
Tests AI to make sure its answers are fair.