On VLMs for Diverse Tasks in Multimodal Meme Classification
By: Deepesh Gavit, Debajyoti Mazumder, Samiran Das, and more
Potential Business Impact:
Helps computers understand jokes and feelings in memes.
In this paper, we present a comprehensive and systematic analysis of vision-language models (VLMs) for disparate meme classification tasks. We introduce a novel approach that generates a VLM-based understanding of meme images and fine-tunes LLMs on textual understanding of the embedded meme text to improve performance. Our contributions are threefold: (1) benchmarking VLMs with diverse prompting strategies tailored to each sub-task; (2) evaluating LoRA fine-tuning across all VLM components to assess performance gains; and (3) proposing a novel approach where detailed meme interpretations generated by VLMs are used to train smaller language models (LLMs), significantly improving classification. Combining VLMs with LLMs improved the baseline performance by 8.34%, 3.52% and 26.24% for sarcasm, offensiveness and sentiment classification, respectively. Our results reveal the strengths and limitations of VLMs and present a novel strategy for meme understanding.
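The abstract describes a two-stage pipeline: a VLM first produces a detailed interpretation of the meme image and its embedded text, and a smaller LLM is then fine-tuned (here sketched with LoRA adapters) on those interpretations for the downstream label. The sketch below only illustrates that flow; the model names (llava-hf/llava-1.5-7b-hf, roberta-base), the prompt wording, and the LoRA hyperparameters are placeholder assumptions, not the authors' actual configuration.

```python
# Hedged sketch of a VLM -> LLM meme-classification pipeline, assuming
# Hugging Face transformers + peft. All model choices and prompts are illustrative.
import torch
from PIL import Image
from transformers import (AutoProcessor, LlavaForConditionalGeneration,
                          AutoTokenizer, AutoModelForSequenceClassification)
from peft import LoraConfig, get_peft_model

# Stage 1: a VLM generates a detailed interpretation of the meme image.
vlm_name = "llava-hf/llava-1.5-7b-hf"  # assumed VLM; any instruction-tuned VLM could be used
processor = AutoProcessor.from_pretrained(vlm_name)
vlm = LlavaForConditionalGeneration.from_pretrained(
    vlm_name, torch_dtype=torch.float16, device_map="auto"
)

def interpret_meme(image_path: str, meme_text: str) -> str:
    """Ask the VLM to describe the image and relate it to the overlaid meme text."""
    image = Image.open(image_path)
    prompt = (
        "USER: <image>\n"
        f'The meme contains the text: "{meme_text}". '
        "Describe the image and explain the intended meaning. ASSISTANT:"
    )
    inputs = processor(images=image, text=prompt, return_tensors="pt").to(
        vlm.device, torch.float16
    )
    out = vlm.generate(**inputs, max_new_tokens=256)
    full = processor.decode(out[0], skip_special_tokens=True)
    return full.split("ASSISTANT:")[-1].strip()  # keep only the generated interpretation

# Stage 2: a smaller LLM is LoRA fine-tuned on (interpretation + embedded text)
# for the classification label (sarcasm / offensiveness / sentiment).
llm_name = "roberta-base"  # assumed small backbone
tokenizer = AutoTokenizer.from_pretrained(llm_name)
classifier = AutoModelForSequenceClassification.from_pretrained(llm_name, num_labels=2)
lora_cfg = LoraConfig(r=8, lora_alpha=16, target_modules=["query", "value"],
                      task_type="SEQ_CLS")
classifier = get_peft_model(classifier, lora_cfg)  # only the LoRA adapters are trained

def encode_example(image_path: str, meme_text: str):
    """Concatenate the VLM interpretation with the embedded text as classifier input."""
    text = interpret_meme(image_path, meme_text) + " [SEP] " + meme_text
    return tokenizer(text, truncation=True, max_length=512, return_tensors="pt")
```

A standard supervised fine-tuning loop (or the transformers Trainer) over `encode_example` outputs and task labels would complete the second stage; the paper's reported gains come from this VLM-interpretation-plus-LLM setup rather than from classifying the raw image-text pair directly.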
Similar Papers
Detecting and Mitigating Hateful Content in Multimodal Memes with Vision-Language Models
CV and Pattern Recognition
Changes mean memes into funny ones.
Evaluating Vision-Language Models for Emotion Recognition
CV and Pattern Recognition
Helps computers understand feelings in pictures.
Caption This, Reason That: VLMs Caught in the Middle
CV and Pattern Recognition
Helps computers understand pictures better by thinking.