On VLMs for Diverse Tasks in Multimodal Meme Classification

Published: May 27, 2025 | arXiv ID: 2505.20937v1

By: Deepesh Gavit, Debajyoti Mazumder, Samiran Das, and more

Potential Business Impact:

Helps computers understand jokes and feelings in memes.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

In this paper, we present a comprehensive and systematic analysis of vision-language models (VLMs) for disparate meme classification tasks. We introduce a novel approach that generates a VLM-based understanding of meme images and fine-tunes LLMs on a textual understanding of the embedded meme text to improve performance. Our contributions are threefold: (1) benchmarking VLMs with diverse prompting strategies tailored to each sub-task; (2) evaluating LoRA fine-tuning across all VLM components to assess performance gains; and (3) proposing a novel approach in which detailed meme interpretations generated by VLMs are used to train smaller language models (LLMs), significantly improving classification. Combining VLMs with LLMs improved the baseline performance by 8.34%, 3.52%, and 26.24% for sarcasm, offensive, and sentiment classification, respectively. Our results reveal the strengths and limitations of VLMs and present a novel strategy for meme understanding.
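The LoRA fine-tuning mentioned in contribution (2) freezes the pretrained weights and learns only a low-rank update. A minimal NumPy sketch of that core idea follows; the shapes, names, and hyperparameters here are illustrative assumptions, not the paper's actual configuration:

```python
import numpy as np

def lora_forward(x, W, A, B, alpha, r):
    # Frozen base projection plus scaled low-rank update:
    # y = x W^T + (alpha / r) * x (B A)^T, where B A has rank <= r.
    return x @ W.T + (alpha / r) * (x @ A.T @ B.T)

rng = np.random.default_rng(0)
d_in, d_out, r = 8, 4, 2                    # hypothetical layer sizes and rank
W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight (not updated)
A = rng.standard_normal((r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                    # trainable up-projection, zero-initialized
x = rng.standard_normal((1, d_in))

# With B zero-initialized, the adapted layer reproduces the frozen model exactly,
# so training starts from the pretrained behavior.
assert np.allclose(lora_forward(x, W, A, B, alpha=16, r=r), x @ W.T)
```

Only `A` and `B` (r·(d_in + d_out) parameters) would be trained, which is why LoRA can be applied cheaply across all VLM components.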

Page Count
16 pages

Category
Computer Science:
Computation and Language