Evaluating Open-Source Vision-Language Models for Multimodal Sarcasm Detection

Published: October 13, 2025 | arXiv ID: 2510.11852v1

By: Saroj Basnet, Shafkat Farabi, Tharindu Ranasinghe, and more

Potential Business Impact:

AI models are tested on their ability to spot sarcasm in posts that pair pictures with words.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Recent advances in open-source vision-language models (VLMs) offer new opportunities for understanding complex and subjective multimodal phenomena such as sarcasm. In this work, we evaluate seven state-of-the-art VLMs (BLIP2, InstructBLIP, OpenFlamingo, LLaVA, PaliGemma, Gemma3, and Qwen-VL) on their ability to detect multimodal sarcasm using zero-, one-, and few-shot prompting, and we further assess their ability to generate explanations for sarcastic instances. Experiments are conducted on three benchmark sarcasm datasets (Muse, MMSD2.0, and SarcNet). Our objectives are twofold: (1) to quantify each model's performance in detecting sarcastic image-caption pairs, and (2) to assess its ability to generate human-quality explanations that highlight the visual-textual incongruities driving the sarcasm. Our results indicate that while current models achieve moderate success at binary sarcasm detection, they cannot yet generate high-quality explanations without task-specific fine-tuning.
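To make the zero-shot setup concrete, the sketch below prompts one of the evaluated models (BLIP2, via the Hugging Face transformers API) with a single image-caption pair and asks for a yes/no sarcasm judgment. The checkpoint, prompt wording, and file names are illustrative assumptions; the paper's actual prompts and evaluation harness are not reproduced here.

```python
import torch
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

# Load a publicly available BLIP-2 checkpoint (assumed choice; the paper
# does not specify which checkpoint was used).
processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained(
    "Salesforce/blip2-opt-2.7b",
    torch_dtype=torch.float16,
    device_map="auto",
)

def is_sarcastic(image_path: str, caption: str) -> str:
    """Zero-shot binary sarcasm judgment for one image-caption pair."""
    image = Image.open(image_path).convert("RGB")
    # Hypothetical prompt wording; few-shot variants would prepend
    # labeled example pairs before this question.
    prompt = (
        f'Question: The image is paired with the caption "{caption}". '
        "Is the caption sarcastic? Answer yes or no. Answer:"
    )
    inputs = processor(images=image, text=prompt, return_tensors="pt").to(
        model.device, torch.float16
    )
    output_ids = model.generate(**inputs, max_new_tokens=5)
    return processor.decode(output_ids[0], skip_special_tokens=True).strip()

# Example usage with a hypothetical local image file.
print(is_sarcastic("post.jpg", "Great, another Monday. Love it."))
```

Swapping the final question for an open-ended one (e.g., "Explain why this caption is sarcastic given the image.") with a larger `max_new_tokens` budget yields the explanation-generation setting the paper also evaluates.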

Page Count
6 pages

Category
Computer Science:
Machine Learning (CS)