AudioJudge: Understanding What Works in Large Audio Model Based Speech Evaluation

Published: July 17, 2025 | arXiv ID: 2507.12705v1

By: Potsawee Manakul, Woody Haosheng Gan, Michael J. Ryan, and more

Potential Business Impact:

Lets computers judge speech quality like people.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Current speech evaluation suffers from two critical limitations: the difficulty of designing specialized systems targeting individual audio characteristics, and the poor correlation between automatic evaluation methods and human preferences. This work presents a systematic study of Large Audio Model (LAM) as a Judge, AudioJudge, investigating whether it can provide a unified evaluation framework that addresses both challenges. We systematically explore AudioJudge across audio characteristic detection tasks, including pronunciation, speaking rate, speaker identification, and speech quality, as well as system-level human preference simulation for automated benchmarking. We investigate different prompt engineering strategies, finding that audio concatenation combined with in-context learning significantly improves performance on both audio characteristic detection and human preference simulation tasks. We further introduce a multi-aspect ensemble AudioJudge to enable general-purpose multi-aspect audio evaluation. This method decomposes speech assessment into specialized judges for lexical content, speech quality, and paralinguistic features, achieving up to 0.91 Spearman correlation with human preferences on our system ranking benchmark. Robustness analysis reveals that while LAMs maintain strong performance under acoustic noise, they exhibit significant verbosity and positional biases that require careful mitigation.
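The ensemble idea described above can be sketched in a few lines. This is an illustrative sketch only, not the paper's implementation: the judge names, per-aspect scores, and weights are hypothetical placeholders standing in for outputs of the specialized lexical, quality, and paralinguistic judges.

```python
def ensemble_score(aspect_scores, weights):
    """Weighted average of per-aspect judge scores (each assumed in [0, 1])."""
    total_w = sum(weights[a] for a in aspect_scores)
    return sum(weights[a] * s for a, s in aspect_scores.items()) / total_w

# Hypothetical per-system scores from three specialized judges.
systems = {
    "system_A": {"lexical": 0.92, "quality": 0.80, "paralinguistic": 0.70},
    "system_B": {"lexical": 0.85, "quality": 0.88, "paralinguistic": 0.75},
}
# Hypothetical aspect weights; the paper does not specify these values.
weights = {"lexical": 0.5, "quality": 0.3, "paralinguistic": 0.2}

# System-level ranking from aggregated judge scores, which could then be
# compared against human preference rankings (e.g. via Spearman correlation).
ranking = sorted(systems, key=lambda s: ensemble_score(systems[s], weights),
                 reverse=True)
print(ranking)
```

The design choice here is that each aspect judge stays narrow and interpretable, while the aggregation step produces the single system-level score needed for benchmarking.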

Country of Origin
🇺🇸 United States

Page Count
20 pages

Category
Computer Science:
Computation and Language