SpeechLLM-as-Judges: Towards General and Interpretable Speech Quality Evaluation

Published: October 16, 2025 | arXiv ID: 2510.14664v1

By: Hui Wang, Jinghua Zhao, Yifan Yang and more

Potential Business Impact:

Helps computers judge how natural synthetic speech sounds and spot fake voices.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Generative speech technologies are progressing rapidly, but evaluating the perceptual quality of synthetic speech remains a core challenge. Existing methods typically rely on scalar scores or binary decisions, which lack interpretability and generalization across tasks and languages. We present SpeechLLM-as-Judges, a new paradigm for enabling large language models (LLMs) to conduct structured and explanation-based speech quality evaluation. To support this direction, we introduce SpeechEval, a large-scale dataset containing 32,207 multilingual speech clips and 128,754 annotations spanning four tasks: quality assessment, pairwise comparison, improvement suggestion, and deepfake detection. Based on this resource, we develop SQ-LLM, a speech-quality-aware LLM trained with chain-of-thought reasoning and reward optimization to improve its evaluation capability. Experimental results show that SQ-LLM delivers strong performance across tasks and languages, revealing the potential of this paradigm for advancing speech quality evaluation. Relevant resources will be open-sourced.
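To make the paradigm concrete, here is a minimal Python sketch of what an LLM-as-judge pipeline for structured, explanation-based speech evaluation could look like. Everything here is an assumption for illustration: the `TASKS` rubric wording, the `build_judge_prompt` and `parse_verdict` helpers, and the `audio_description` placeholder are hypothetical and are not the paper's actual interface or prompt format.

```python
import json

# Hypothetical rubrics for the four SpeechEval tasks named in the abstract;
# the wording is illustrative, not taken from the paper.
TASKS = {
    "quality_assessment": "Rate the overall perceptual quality from 1 (bad) to 5 (excellent).",
    "pairwise_comparison": "Decide which of two clips sounds better: 'A', 'B', or 'tie'.",
    "improvement_suggestion": "List concrete changes that would improve the clip.",
    "deepfake_detection": "Judge whether the clip is 'human' or 'synthetic'.",
}

def build_judge_prompt(task: str, audio_description: str) -> str:
    """Compose a chain-of-thought judging prompt.

    `audio_description` stands in for however a speech-aware LLM actually
    receives audio (tokens, embeddings, or features); the real input format
    used by SQ-LLM is not specified here.
    """
    return (
        "You are a speech quality judge.\n"
        f"Task: {TASKS[task]}\n"
        f"Audio: {audio_description}\n"
        "First reason step by step about prosody, intelligibility, and artifacts.\n"
        'Then output a JSON verdict: {"verdict": ..., "explanation": ...}'
    )

def parse_verdict(raw_response: str) -> dict:
    """Extract the JSON verdict, tolerating the reasoning text around it."""
    start, end = raw_response.find("{"), raw_response.rfind("}")
    if start == -1 or end == -1:
        raise ValueError("no JSON verdict found in model output")
    return json.loads(raw_response[start : end + 1])

# A fabricated model response, only to show the expected output structure.
fake_response = (
    "The prosody is natural but sibilants carry a faint buzz... "
    '{"verdict": 4, "explanation": "Natural prosody; slight vocoder buzz in sibilants."}'
)
print(build_judge_prompt("quality_assessment", "<audio features here>"))
print(parse_verdict(fake_response))
```

The design point this sketch tries to capture is the one the abstract emphasizes: pairing free-form chain-of-thought reasoning with a machine-parseable verdict gives both interpretability (the explanation) and a score that can be compared across tasks and languages.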

Page Count
26 pages

Category
Computer Science: Sound