Who Gets the Mic? Investigating Gender Bias in the Speaker Assignment of a Speech-LLM
By: Dariia Puhach, Amir H. Payberah, Éva Székely
Potential Business Impact:
Speech-generating AI shows mild gender leanings when choosing voices.
Similar to text-based Large Language Models (LLMs), Speech-LLMs exhibit emergent abilities and context awareness. However, whether these similarities extend to gender bias remains an open question. This study proposes a methodology that leverages speaker assignment as an analytic tool for bias investigation. Unlike text-based models, which encode gendered associations implicitly, Speech-LLMs must produce a gendered voice, making speaker selection an explicit bias cue. We evaluate Bark, a Text-to-Speech (TTS) model, analyzing the default speakers it assigns to textual prompts. If Bark's speaker selection systematically aligns with gendered associations, it may reveal patterns in its training data or model design. To test this, we construct two datasets: (i) Professions, containing gender-stereotyped occupations, and (ii) Gender-Colored Words, containing words with gendered connotations. While Bark does not exhibit systematic bias, it demonstrates gender awareness and shows some gender inclinations.
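The probing setup the abstract describes can be approximated with the open-source `bark` package (github.com/suno-ai/bark): synthesize each textual prompt without passing a speaker preset, so the model itself picks the voice, then label the perceived gender of each clip. A minimal sketch follows; the sample prompts and the downstream gender-labeling step are illustrative assumptions, not the authors' exact datasets or classification method.

```python
# Sketch: probe Bark's default speaker choice for gender-stereotyped prompts.
# Assumptions: `bark` and `scipy` are installed; prompt lists and the
# gender-labeling step are illustrative, not the paper's exact setup.
from bark import SAMPLE_RATE, generate_audio, preload_models
from scipy.io.wavfile import write as write_wav

# (i) Professions: gender-stereotyped occupations (hypothetical sample entries).
professions = ["The nurse said:", "The mechanic said:", "The engineer said:"]
# (ii) Gender-Colored Words: gendered connotations (hypothetical sample entries).
gender_colored = ["What a delicate flower.", "He gave a firm handshake."]

preload_models()  # download and cache Bark's text, coarse, and fine models

for i, prompt in enumerate(professions + gender_colored):
    # Key point: no history_prompt (speaker preset) is given, so Bark itself
    # selects the voice -- the explicit cue the study analyzes.
    audio = generate_audio(prompt)
    write_wav(f"sample_{i:03d}.wav", SAMPLE_RATE, audio)
    # The saved clips would then be labeled for perceived speaker gender,
    # e.g., by human raters or an automatic classifier (assumed here).
```

Because Bark's generation is stochastic, an aggregate analysis would need many generations per prompt; the sketch produces one clip each for brevity.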
Similar Papers
Voice, Bias, and Coreference: An Interpretability Study of Gender in Speech Translation
Computation and Language
Translates speech, guessing gender from sound, not just pitch.
When Voice Matters: Evidence of Gender Disparity in Positional Bias of SpeechLLMs
Audio and Speech Processing
Finds positional bias in speech AI, stronger for female voices.
MedVoiceBias: A Controlled Study of Audio LLM Behavior in Clinical Decision-Making
Computation and Language
Voice changes how computers give medical advice.