Who Gets the Mic? Investigating Gender Bias in the Speaker Assignment of a Speech-LLM

Published: August 19, 2025 | arXiv ID: 2508.13603v1

By: Dariia Puhach, Amir H. Payberah, Éva Székely

Potential Business Impact:

Default voice selection in speech-generating AI shows some gender leanings, though not systematic bias.

Business Areas:
Speech Recognition Data and Analytics, Software

Similar to text-based Large Language Models (LLMs), Speech-LLMs exhibit emergent abilities and context awareness. However, whether these similarities extend to gender bias remains an open question. This study proposes a methodology that leverages speaker assignment as an analytic tool for bias investigation. Unlike text-based models, which encode gendered associations implicitly, Speech-LLMs must produce a gendered voice, making speaker selection an explicit bias cue. We evaluate Bark, a Text-to-Speech (TTS) model, analyzing its default speaker assignments for textual prompts. If Bark's speaker selection systematically aligns with gendered associations, it may reveal patterns in its training data or model design. To test this, we construct two datasets: (i) Professions, containing gender-stereotyped occupations, and (ii) Gender-Colored Words, containing words with gendered connotations. While Bark does not exhibit systematic bias, it demonstrates gender awareness and shows some gender inclinations.
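The core of the evaluation described above is counting how often the model's default voice is perceived as female for each prompt category and checking whether the rate deviates from chance. The paper does not publish its analysis code, so the sketch below is purely illustrative: the counts are made up, and the exact binomial test is just one plausible way to assess deviation from a 50/50 baseline.

```python
from math import comb

def binom_two_sided_p(k: int, n: int, p: float = 0.5) -> float:
    """Exact two-sided binomial test: sums the probability of every
    outcome at most as likely as the observed count under the null."""
    pk = comb(n, k) * p**k * (1 - p) ** (n - k)
    return sum(
        comb(n, i) * p**i * (1 - p) ** (n - i)
        for i in range(n + 1)
        if comb(n, i) * p**i * (1 - p) ** (n - i) <= pk + 1e-12
    )

# Hypothetical counts only (not the paper's data): for each prompt
# category, (prompts assigned a female-perceived voice, total prompts).
observed = {
    "female-stereotyped professions": (14, 20),
    "male-stereotyped professions": (7, 20),
    "gender-neutral controls": (11, 20),
}

for category, (k, n) in observed.items():
    rate = k / n
    p_val = binom_two_sided_p(k, n)
    print(f"{category}: female-voice rate {rate:.2f}, p = {p_val:.3f}")
```

With counts like these, no category reaches significance at the 0.05 level, which is the kind of result the abstract summarizes as "gender inclinations" without systematic bias.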

Country of Origin
🇸🇪 Sweden


Page Count
5 pages

Category
Computer Science:
Computation and Language