Evaluating Multimodal Large Language Models on Spoken Sarcasm Understanding
By: Zhu Li, Xiyuan Gao, Yuqing Zhang, and more
Potential Business Impact:
Helps computers understand sarcasm from voice, text, and facial cues.
Sarcasm detection remains a challenge in natural language understanding, as sarcastic intent often relies on subtle cross-modal cues spanning text, speech, and vision. While prior work has primarily focused on textual or visual-textual sarcasm, comprehensive audio-visual-textual sarcasm understanding remains underexplored. In this paper, we systematically evaluate large language models (LLMs) and multimodal LLMs (MLLMs) for sarcasm detection on English (MUStARD++) and Chinese (MCSD 1.0) benchmarks in zero-shot, few-shot, and LoRA fine-tuning settings. Beyond direct classification, we also use the models as feature encoders, integrating their representations through a collaborative gating fusion module. Experimental results show that audio-based models achieve the strongest unimodal performance, while text-audio and audio-vision combinations outperform both unimodal and trimodal models. Furthermore, MLLMs such as Qwen-Omni show competitive zero-shot and fine-tuned performance. Our findings highlight the potential of MLLMs for cross-lingual, audio-visual-textual sarcasm understanding.
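To make the encoder-fusion setup concrete, the sketch below shows one common way a gated fusion over per-modality embeddings can be implemented. It is a minimal illustration, not the paper's actual architecture: the projection sizes, gating form, and class head are assumptions, and the real collaborative gating module may differ.

```python
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    """Illustrative gated fusion of per-modality features (hypothetical sketch;
    the paper's collaborative gating fusion module may differ in detail)."""

    def __init__(self, dims, hidden_dim=256, num_classes=2):
        super().__init__()
        # Project each modality (e.g. text, audio, vision encoder output)
        # into a shared hidden space.
        self.proj = nn.ModuleList(nn.Linear(d, hidden_dim) for d in dims)
        # One gate per modality, conditioned on all projected modalities,
        # so each stream is re-weighted using information from the others.
        self.gates = nn.ModuleList(
            nn.Linear(hidden_dim * len(dims), hidden_dim) for _ in dims
        )
        self.classifier = nn.Linear(hidden_dim, num_classes)  # sarcastic vs. not

    def forward(self, feats):
        h = [torch.tanh(p(x)) for p, x in zip(self.proj, feats)]
        joint = torch.cat(h, dim=-1)
        gated = [torch.sigmoid(g(joint)) * hi for g, hi in zip(self.gates, h)]
        fused = torch.stack(gated, dim=0).sum(dim=0)
        return self.classifier(fused)

# Example with assumed feature sizes: text 768-d, audio 1024-d, vision 512-d.
model = GatedFusion(dims=[768, 1024, 512])
logits = model([torch.randn(4, 768), torch.randn(4, 1024), torch.randn(4, 512)])
print(logits.shape)  # torch.Size([4, 2])
```

Dropping one of the input streams (and its gate) gives the bimodal text-audio or audio-vision variants the abstract reports as strongest.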
Similar Papers
Can Large Vision-Language Models Understand Multimodal Sarcasm?
Computation and Language
Helps computers understand jokes and sarcasm better.
Seeing Sarcasm Through Different Eyes: Analyzing Multimodal Sarcasm Perception in Large Vision-Language Models
Computation and Language
Helps computers understand jokes and sarcasm better.
Evaluating Open-Source Vision-Language Models for Multimodal Sarcasm Detection
Machine Learning (CS)
Computers learn to spot sarcasm in pictures and words.