Evaluating Open-Source Vision-Language Models for Multimodal Sarcasm Detection
By: Saroj Basnet, Shafkat Farabi, Tharindu Ranasinghe, and more
Potential Business Impact:
Computers learn to spot sarcasm in pictures and words.
Recent advances in open-source vision-language models (VLMs) offer new opportunities for understanding complex and subjective multimodal phenomena such as sarcasm. In this work, we evaluate seven state-of-the-art VLMs (BLIP2, InstructBLIP, OpenFlamingo, LLaVA, PaliGemma, Gemma3, and Qwen-VL) on their ability to detect multimodal sarcasm using zero-, one-, and few-shot prompting. We also assess the models' ability to generate explanations for sarcastic instances. Experiments are conducted on three benchmark sarcasm datasets (Muse, MMSD2.0, and SarcNet). Our objectives are twofold: (1) to quantify each model's performance in detecting sarcastic image-caption pairs, and (2) to assess its ability to generate human-quality explanations that highlight the visual-textual incongruities driving sarcasm. Our results indicate that, while current models achieve moderate success in binary sarcasm detection, they still cannot generate high-quality explanations without task-specific fine-tuning.
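To illustrate the zero-shot prompting setup described in the abstract, the sketch below asks an open-source VLM for a binary sarcasm judgment on an image-caption pair. This is a minimal example assuming the Hugging Face transformers library and a LLaVA-1.5 checkpoint (llava-hf/llava-1.5-7b-hf); the prompt wording and the is_sarcastic helper are illustrative and not the authors' exact pipeline.

```python
# Minimal zero-shot sketch (illustrative, not the paper's exact setup):
# prompt an open-source VLM to label an image-caption pair as sarcastic.
import torch
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration

model_id = "llava-hf/llava-1.5-7b-hf"  # assumed checkpoint; other VLMs follow a similar pattern
processor = AutoProcessor.from_pretrained(model_id)
model = LlavaForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

def is_sarcastic(image_path: str, caption: str) -> str:
    """Return the model's yes/no sarcasm judgment for an image-caption pair."""
    image = Image.open(image_path)
    # Prompt wording is an assumption; the paper's exact prompts may differ.
    prompt = (
        "USER: <image>\n"
        f'Caption: "{caption}"\n'
        "Considering both the image and the caption, is this post sarcastic? "
        "Answer with 'yes' or 'no' only.\nASSISTANT:"
    )
    inputs = processor(images=image, text=prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=5, do_sample=False)
    answer = processor.decode(output[0], skip_special_tokens=True)
    return answer.split("ASSISTANT:")[-1].strip().lower()

# Example (hypothetical files): print(is_sarcastic("post.jpg", "What lovely weather we're having."))
```

One- and few-shot variants would prepend one or more labeled image-caption examples to the same prompt before the query instance, and the explanation task would replace the yes/no instruction with a request to describe the visual-textual incongruity.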
Similar Papers
Can Large Vision-Language Models Understand Multimodal Sarcasm?
Computation and Language
Helps computers understand jokes and sarcasm better.
Seeing Sarcasm Through Different Eyes: Analyzing Multimodal Sarcasm Perception in Large Vision-Language Models
Computation and Language
Helps computers understand jokes and sarcasm better.
Evaluating Multimodal Large Language Models on Spoken Sarcasm Understanding
Computation and Language
Helps computers understand jokes by voice, text, and face.