QualiSpeech: A Speech Quality Assessment Dataset with Natural Language Reasoning and Descriptions
By: Siyin Wang, Wenyi Yu, Xianzhao Chen, and more
Potential Business Impact:
Helps computers describe sound problems in detail.
This paper explores a novel perspective on speech quality assessment by leveraging natural language descriptions, which offer richer, more nuanced insights than traditional numerical scoring methods. Natural language feedback provides instructive recommendations and detailed evaluations, yet existing datasets lack the comprehensive annotations needed for this approach. To bridge this gap, we introduce QualiSpeech, a comprehensive low-level speech quality assessment dataset encompassing 11 key aspects and detailed natural language comments that include reasoning and contextual insights. Additionally, we propose the QualiSpeech Benchmark to evaluate the low-level speech understanding capabilities of auditory large language models (LLMs). Experimental results demonstrate that fine-tuned auditory LLMs can reliably generate detailed descriptions of noise and distortion, effectively identifying their types and temporal characteristics. The results further highlight the potential for incorporating reasoning to enhance the accuracy and reliability of quality assessments. The dataset will be released at https://huggingface.co/datasets/tsinghua-ee/QualiSpeech.
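Since the dataset is slated for release on the Hugging Face Hub, a minimal sketch of how one might load and inspect it is shown below, assuming it follows the standard Hugging Face `datasets` format. The split name and field names (e.g. "audio", "quality_comment") are hypothetical, inferred from the abstract's description of audio clips paired with natural language quality annotations, not confirmed by the paper.

```python
# Minimal sketch: loading QualiSpeech once released on the Hugging Face Hub.
# Assumes the repo follows the standard `datasets` format; split and field
# names below are hypothetical and should be checked against the dataset card.
from datasets import load_dataset

ds = load_dataset("tsinghua-ee/QualiSpeech", split="train")  # split name assumed

sample = ds[0]
print(sample.keys())  # inspect the actual annotation fields once released
# Hypothetical fields: an audio clip plus natural language comments covering
# the 11 low-level quality aspects (noise, distortion, etc.) with reasoning.
```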
Similar Papers
SpeechLLM-as-Judges: Towards General and Interpretable Speech Quality Evaluation
Sound
Helps computers judge how realistic and good synthetic voices sound.
SpokenNativQA: Multilingual Everyday Spoken Queries for LLMs
Computation and Language
Tests how well computers understand spoken questions.
Benchmarking Contextual and Paralinguistic Reasoning in Speech-LLMs: A Case Study with In-the-Wild Data
Computation and Language
Helps computers understand feelings in voices.