Score: 4

SpeechIQ: Speech-Agentic Intelligence Quotient Across Cognitive Levels in Voice Understanding by Large Language Models

Published: July 25, 2025 | arXiv ID: 2507.19361v2

By: Zhen Wan, Chao-Han Huck Yang, Yahan Yu, and more

BigTech Affiliations: NVIDIA

Potential Business Impact:

Tests how well computers understand spoken words.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

We introduce the Speech-based Intelligence Quotient (SIQ), a new human cognition-inspired evaluation pipeline for voice-understanding large language models (LLM Voice), designed to assess their voice understanding ability. Moving beyond popular voice understanding metrics such as word error rate (WER), SIQ examines LLM Voice across three cognitive levels motivated by Bloom's Taxonomy: (1) Remembering (i.e., WER for verbatim accuracy); (2) Understanding (i.e., similarity of the LLM's interpretations); and (3) Application (i.e., QA accuracy for simulating downstream tasks). We demonstrate that SIQ not only quantifies voice understanding abilities but also provides unified comparisons between cascaded methods (e.g., ASR → LLM) and end-to-end models, identifies annotation errors in existing benchmarks, and detects hallucinations in LLM Voice. Our framework represents a first-of-its-kind intelligence examination that bridges cognitive principles with voice-oriented benchmarks, while exposing overlooked challenges in multi-modal training. Our code and data will be open-sourced to encourage future studies.
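The three cognitive levels can be illustrated with a minimal sketch. This is not the paper's implementation: the metric names, the token-overlap stand-in for interpretation similarity (the paper uses LLM-based judgments), and the exact-match QA scorer are all simplifying assumptions here.

```python
def wer(reference: str, hypothesis: str) -> float:
    """Level 1, Remembering: word error rate via edit distance on word sequences."""
    ref, hyp = reference.lower().split(), hypothesis.lower().split()
    # Standard dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost)
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

def interpretation_similarity(a: str, b: str) -> float:
    """Level 2, Understanding: Jaccard token overlap as a crude stand-in for
    comparing the LLM's interpretation against a reference interpretation."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 1.0

def qa_accuracy(predictions: list[str], answers: list[str]) -> float:
    """Level 3, Application: exact-match accuracy on downstream QA pairs."""
    correct = sum(p.strip().lower() == a.strip().lower()
                  for p, a in zip(predictions, answers))
    return correct / max(len(answers), 1)
```

A composite score could then average `(1 - wer)`, the similarity, and the QA accuracy; the paper's actual aggregation across levels may differ.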

Country of Origin
🇺🇸 🇯🇵 United States, Japan


Page Count
18 pages

Category
Computer Science:
Computation and Language