Speech-DRAME: A Framework for Human-Aligned Benchmarks in Speech Role-Play
By: Jiatong Shi, Jionghao Han, Yichen Lu, and more
Potential Business Impact:
Makes AI better at acting like people in conversations.
Role-play has become a key testbed for generative models, expanding from text-only dialogue to multimodal interaction. Extending role-play to speech captures prosody, emotion, and delivery, but also poses new evaluation challenges. Current pipelines often use audio large language models (ALLMs) as zero-shot judges, which miss paralinguistic cues, collapse multiple aspects into coarse scores, and rely on synthetic speech references that fail to reflect real-world roles. We present Speech-DRAME, a unified framework that contributes at three levels: (i) Speech-DRAME-EvalBench, an evaluation benchmark with bilingual human-annotated data and protocols for training and testing speech evaluation models (SEMs), (ii) DRAME-Eval, a fine-tuned evaluation model that substantially outperforms zero-shot and few-shot ALLMs, and (iii) Speech-DRAME-RoleBench, a speech role-play benchmark that leverages DRAME-Eval as an automatic judge to compare speech foundation models (SFMs). Speech-DRAME distinguishes between two complementary evaluation strategies: Archetype Evaluation, a top-down approach measuring adherence to broad role archetypes, and Realism Evaluation, a bottom-up approach grounded in real human speech that emphasizes nuanced role quality. Compared to zero-shot ALLM judges, DRAME-Eval achieves stronger agreement with human ratings, improving Pearson correlation from 0.480 to 0.629 on archetype evaluation and from 0.390 to 0.625 on realism evaluation. By integrating transparent benchmark resources, modeling approaches, and system-level evaluation, Speech-DRAME provides the first comprehensive, reproducible foundation for assessing spoken role-play.
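The reported agreement numbers come from comparing automatic judge scores against human ratings on the same samples. As a rough illustration only (this is not the paper's code; the scores, variable names, and 1-5 scale below are hypothetical), such an agreement check boils down to computing a Pearson correlation between the two lists of scores:

```python
# Minimal sketch of a judge-vs-human agreement check, assuming each
# utterance has one human rating and one automatic judge score.
# All values and names here are illustrative, not the paper's data or API.
from scipy.stats import pearsonr

# Hypothetical per-utterance scores on a 1-5 scale.
human_ratings = [4.0, 3.5, 2.0, 5.0, 3.0, 4.5]
judge_scores  = [3.8, 3.0, 2.5, 4.6, 3.2, 4.1]

# Pearson correlation: higher r means closer alignment with human judgment.
r, p_value = pearsonr(human_ratings, judge_scores)
print(f"Pearson r = {r:.3f} (p = {p_value:.3g})")
```

In the paper's setting, this comparison is run separately for archetype and realism evaluation, which is where the 0.629 and 0.625 correlations for DRAME-Eval come from.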
Similar Papers
SpeechRole: A Large-Scale Dataset and Benchmark for Evaluating Speech Role-Playing Agents
Computation and Language
Makes AI talk like different people for better chats.
SpeechR: A Benchmark for Speech Reasoning in Large Audio-Language Models
Computation and Language
Tests if computers understand spoken words like humans do.