SpeakerSleuth: Evaluating Large Audio-Language Models as Judges for Multi-turn Speaker Consistency
By: Jonggeun Lee, Junseong Pyo, Gyuhyeon Seo, and more
Potential Business Impact:
Helps AI tell whether a speaker's voice changes during a conversation.
Large Audio-Language Models (LALMs) as judges have emerged as a prominent approach for evaluating speech generation quality, yet their ability to assess speaker consistency across multi-turn conversations remains unexplored. We present SpeakerSleuth, a benchmark that evaluates whether LALMs can reliably judge speaker consistency in multi-turn dialogues through three tasks reflecting real-world requirements. We construct 1,818 human-verified evaluation instances across four diverse datasets spanning synthetic and real speech, with controlled acoustic difficulty. Evaluating nine widely used LALMs, we find that models struggle to reliably detect acoustic inconsistencies. For instance, given audio samples of the same speaker's turns, some models overpredict inconsistency while others are overly lenient. Models also struggle to identify exactly which turns are problematic. When the other interlocutors' turns are provided as well, performance degrades dramatically: models prioritize textual coherence over acoustic cues, failing to detect even an obvious gender switch in a speaker's voice. In contrast, models perform substantially better at choosing the audio that best matches the speaker from among several acoustic variants, demonstrating that they do possess inherent acoustic discrimination ability. These findings expose a significant bias in LALMs: they tend to prioritize text over acoustics, a fundamental modality imbalance that must be addressed to build reliable audio-language judges.
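To make the benchmark's framing concrete, below is a minimal Python sketch of how the first task (a binary consistency judgment over one speaker's turns) might be scored against a LALM judge. The Turn and Instance data structures, the StubJudge class and its query method, and the prompt wording are all illustrative assumptions, not SpeakerSleuth's actual data format or interface.

from dataclasses import dataclass, field

@dataclass
class Turn:
    speaker_id: str    # dialogue-level speaker label, e.g. "A" or "B" (assumed schema)
    audio_path: str    # path to this turn's audio clip
    transcript: str    # textual content of the turn

@dataclass
class Instance:
    turns: list[Turn]
    target_speaker: str                                    # speaker whose consistency is judged
    swapped_turns: set[int] = field(default_factory=set)   # indices where a voice swap was injected

class StubJudge:
    """Placeholder for a real LALM call (an audio-capable chat model).
    Replace query() with the actual model invocation."""
    def query(self, audio: list[str], prompt: str) -> str:
        return "consistent"   # trivially lenient baseline, mirroring one failure mode in the paper

def task1_binary_consistency(judge, inst: Instance) -> bool:
    """Task framing 1 (sketch): given only the target speaker's turns,
    does the judge say the voice is consistent throughout?"""
    clips = [t.audio_path for t in inst.turns
             if t.speaker_id == inst.target_speaker]
    prompt = ("Each clip is one turn attributed to the same speaker. "
              "Do they all sound like the same person? "
              "Answer exactly 'consistent' or 'inconsistent'.")
    return judge.query(clips, prompt).strip().lower() == "consistent"

def evaluate(judge, instances: list[Instance]) -> float:
    """Accuracy of the binary consistency judgment over a benchmark split:
    the gold label is 'consistent' iff no turns were swapped."""
    correct = sum(
        task1_binary_consistency(judge, inst) == (not inst.swapped_turns)
        for inst in instances
    )
    return correct / len(instances)

Swapping StubJudge for a real model call, and adding analogous scorers for turn localization and best-match selection, would extend this into a full harness along the lines the abstract describes.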
Similar Papers
AudioJudge: Understanding What Works in Large Audio Model Based Speech Evaluation
Computation and Language
Lets computers judge speech quality like people.
Audio-Aware Large Language Models as Judges for Speaking Styles
Audio and Speech Processing
AI judges speaking styles better than people.
Learning an Efficient Multi-Turn Dialogue Evaluator from Multiple Judges
Computation and Language
Grades AI chats fast using one smart judge.