AEQ-Bench: Measuring Empathy of Omni-Modal Large Models
By: Xuan Luo, Lewei Yao, Libo Zhao, and more
While the automatic evaluation of omni-modal large models (OLMs) is essential, assessing empathy remains a significant challenge due to its inherently affective nature. To investigate this challenge, we introduce AEQ-Bench (Audio Empathy Quotient Benchmark), a novel benchmark that systematically assesses two core empathetic capabilities of OLMs: (i) generating empathetic responses by comprehending affective cues from multi-modal inputs (audio + text), and (ii) judging the empathy of audio responses without relying on text transcription. Compared to existing benchmarks, AEQ-Bench incorporates two novel settings that vary in context specificity and speech tone. Comprehensive assessment across linguistic and paralinguistic metrics reveals that (1) OLMs trained with audio output capabilities generally outperform models limited to text-only outputs, and (2) while OLMs align with human judgments on coarse-grained quality assessment, they remain unreliable for evaluating fine-grained paralinguistic expressiveness.
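As a rough illustration of the kind of coarse-grained agreement check the abstract describes (comparing OLM empathy judgments against human judgments), the sketch below computes a rank correlation between hypothetical human ratings and model-assigned scores. The data, the 1-5 rating scale, and the choice of Spearman correlation are assumptions for illustration, not details taken from the paper.

```python
# Minimal sketch (assumptions, not AEQ-Bench's actual protocol):
# measure coarse-grained agreement between an OLM judge and human raters.
from scipy.stats import spearmanr

# Hypothetical empathy ratings for the same set of audio responses (assumed 1-5 scale).
human_scores = [4, 2, 5, 3, 1, 4, 3, 5, 2, 4]
model_scores = [4, 3, 5, 3, 2, 3, 3, 5, 2, 4]

# Spearman rank correlation as one possible agreement measure.
rho, p_value = spearmanr(human_scores, model_scores)
print(f"Spearman rho = {rho:.3f} (p = {p_value:.3f})")
```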
Similar Papers
EmoBench-M: Benchmarking Emotional Intelligence for Multimodal Large Language Models
Computation and Language
Helps robots understand and respond to feelings.
AV-EMO-Reasoning: Benchmarking Emotional Reasoning Capabilities in Omni-modal LLMs with Audio-visual Cues
Multimedia
AI understands feelings better from voices and faces.
Empathy Omni: Enabling Empathetic Speech Response Generation through Large Language Models
Computation and Language
Makes AI assistants understand and respond with feelings.