Score: 1

Can Large Vision-Language Models Understand Multimodal Sarcasm?

Published: August 5, 2025 | arXiv ID: 2508.03654v1

By: Xinyu Wang, Yue Zhang, Liqiang Jing

Potential Business Impact:

Helps AI systems detect and explain sarcasm in combined image-and-text content, improving sentiment analysis and other emotion-sensitive applications.

Sarcasm is a complex linguistic phenomenon that involves a disparity between literal and intended meanings, making it challenging for sentiment analysis and other emotion-sensitive tasks. While traditional sarcasm detection methods primarily focus on text, recent approaches have incorporated multimodal information. However, the application of Large Vision-Language Models (LVLMs) in Multimodal Sarcasm Analysis (MSA) remains underexplored. In this paper, we evaluate LVLMs on MSA tasks, specifically focusing on Multimodal Sarcasm Detection and Multimodal Sarcasm Explanation. Through comprehensive experiments, we identify key limitations, such as insufficient visual understanding and a lack of conceptual knowledge. To address these issues, we propose a training-free framework that integrates in-depth object extraction and external conceptual knowledge to improve the model's ability to interpret and explain sarcasm in multimodal contexts. The experimental results on multiple models show the effectiveness of our proposed framework. The code is available at https://github.com/cp-cp/LVLM-MSA.
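The abstract describes a training-free framework that augments the LVLM's input with extracted objects and external conceptual knowledge rather than fine-tuning the model. The sketch below illustrates what such a prompt-augmentation pipeline could look like; it is an assumption-based illustration, not the authors' implementation, and the helper functions (`extract_objects`, `lookup_concepts`, `query_lvlm`) are hypothetical placeholders for an object detector, a knowledge source, and the target LVLM.

```python
# Hypothetical sketch of a training-free prompt-augmentation pipeline for
# multimodal sarcasm detection. The helpers below are placeholders, not the
# paper's actual code or APIs.

from typing import List


def extract_objects(image_path: str) -> List[str]:
    """Placeholder: run an off-the-shelf detector/captioner on the image
    and return salient object names (the 'in-depth object extraction' step)."""
    raise NotImplementedError


def lookup_concepts(objects: List[str]) -> List[str]:
    """Placeholder: retrieve commonsense/conceptual facts about each object
    from an external knowledge source."""
    raise NotImplementedError


def query_lvlm(image_path: str, prompt: str) -> str:
    """Placeholder: send the image and the augmented prompt to the target LVLM."""
    raise NotImplementedError


def detect_sarcasm(image_path: str, caption: str) -> str:
    # 1. Extract objects from the image to compensate for weak visual grounding.
    objects = extract_objects(image_path)
    # 2. Fetch external conceptual knowledge about those objects.
    concepts = lookup_concepts(objects)
    # 3. Build an augmented prompt so the model can compare literal content
    #    (objects, caption) with intended meaning (conceptual knowledge).
    prompt = (
        f"Image objects: {', '.join(objects)}\n"
        f"Related concepts: {'; '.join(concepts)}\n"
        f"Caption: {caption}\n"
        "Is this image-caption pair sarcastic? Answer yes or no, then explain."
    )
    # 4. Training-free: only the inference-time prompt changes, never the weights.
    return query_lvlm(image_path, prompt)
```

In this reading, the same augmented prompt could serve both tasks: a yes/no answer for Multimodal Sarcasm Detection and the free-text rationale for Multimodal Sarcasm Explanation.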

Country of Origin
🇺🇸 United States

Repos / Data Links

https://github.com/cp-cp/LVLM-MSA

Page Count
6 pages

Category
Computer Science:
Computation and Language