Evaluating Multimodal Large Language Models on Spoken Sarcasm Understanding

Published: September 18, 2025 | arXiv ID: 2509.15476v1

By: Zhu Li, Xiyuan Gao, Yuqing Zhang, and more

Potential Business Impact:

Helps computers recognize sarcasm from voice, text, and facial cues.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Sarcasm detection remains a challenge in natural language understanding, as sarcastic intent often relies on subtle cross-modal cues spanning text, speech, and vision. While prior work has primarily focused on textual or visual-textual sarcasm, comprehensive audio-visual-textual sarcasm understanding remains underexplored. In this paper, we systematically evaluate large language models (LLMs) and multimodal LLMs for sarcasm detection on English (MUStARD++) and Chinese (MCSD 1.0) in zero-shot, few-shot, and LoRA fine-tuning settings. In addition to direct classification, we explore models as feature encoders, integrating their representations through a collaborative gating fusion module. Experimental results show that audio-based models achieve the strongest unimodal performance, while text-audio and audio-vision combinations outperform unimodal and trimodal models. Furthermore, MLLMs such as Qwen-Omni show competitive zero-shot and fine-tuned performance. Our findings highlight the potential of MLLMs for cross-lingual, audio-visual-textual sarcasm understanding.
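The abstract mentions using (M)LLMs as frozen feature encoders whose representations are combined through a collaborative gating fusion module before classification. The paper's exact architecture is not given in this summary, so the sketch below is only an assumption-labeled illustration of that idea: each modality (text, audio, vision) is projected to a shared space and weighted by a learned gate that sees all modalities jointly. The class name `GatedFusion`, the feature dimensions, and the gating formulation are hypothetical.

```python
import torch
import torch.nn as nn


class GatedFusion(nn.Module):
    """Minimal sketch of a collaborative gating fusion over modality features.

    Hypothetical design: each modality's encoder output is projected to a
    shared hidden size, and a sigmoid gate conditioned on all modalities
    weights its contribution before a binary sarcasm classifier.
    """

    def __init__(self, dims, hidden=256, num_classes=2):
        super().__init__()
        # dims: modality name -> encoder feature size, e.g.
        # {"text": 4096, "audio": 1024, "vision": 768} (hypothetical sizes)
        self.proj = nn.ModuleDict({m: nn.Linear(d, hidden) for m, d in dims.items()})
        self.gate = nn.ModuleDict({m: nn.Linear(len(dims) * hidden, hidden) for m in dims})
        self.classifier = nn.Linear(hidden, num_classes)

    def forward(self, feats):
        # feats: modality name -> (batch, feature_dim) tensor from a frozen encoder
        projected = {m: torch.tanh(self.proj[m](x)) for m, x in feats.items()}
        joint = torch.cat(list(projected.values()), dim=-1)
        # each modality's gate attends to the joint representation ("collaborative")
        gated = [torch.sigmoid(self.gate[m](joint)) * h for m, h in projected.items()]
        fused = torch.stack(gated, dim=0).sum(dim=0)
        return self.classifier(fused)


if __name__ == "__main__":
    # random tensors stand in for frozen (M)LLM encoder outputs
    dims = {"text": 4096, "audio": 1024, "vision": 768}
    model = GatedFusion(dims)
    batch = {m: torch.randn(8, d) for m, d in dims.items()}
    logits = model(batch)  # shape: (8, 2) -> sarcastic / not sarcastic
    print(logits.shape)
```

Dropping one of the three entries from `dims` (and from the input batch) gives the bimodal variants (text-audio, audio-vision) that the abstract reports as outperforming unimodal and trimodal setups.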

Page Count
5 pages

Category
Computer Science:
Computation and Language