See, Explain, and Intervene: A Few-Shot Multimodal Agent Framework for Hateful Meme Moderation
By: Naquee Rizwan, Subhankar Swain, Paramananda Bhaskar, and more
Potential Business Impact:
Stops hateful memes before they spread.
In this work, we examine hateful memes from three complementary angles: how to detect them, how to explain their content, and how to intervene before they are posted, by applying a range of strategies built on top of generative AI models. To the best of our knowledge, explanation and intervention have typically been studied separately from detection, which does not reflect real-world conditions. Further, since curating large annotated datasets for meme moderation is prohibitively expensive, we propose a novel framework that leverages task-specific generative multimodal agents and the few-shot adaptability of large multimodal models to cater to different types of memes. We believe this is the first work focused on generalizable hateful meme moderation under limited-data conditions, and that it has strong potential for deployment in real-world production scenarios. Warning: contains potentially toxic content.
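The abstract outlines a three-stage pipeline: a detection agent, an explanation agent, and an intervention agent, each adapting a large multimodal model (LMM) in-context with a handful of labelled examples. The paper's actual prompts, model, and agent interfaces are not given here, so the sketch below is only an illustration of that flow under assumptions: call_lmm, FewShotExample, build_prompt, and all prompt wording are hypothetical stand-ins, not the authors' implementation.

from dataclasses import dataclass
from typing import Dict, List

@dataclass
class FewShotExample:
    caption: str      # text overlaid on / accompanying the meme
    label: str        # e.g. "hateful" or "not hateful"
    rationale: str    # short human-written explanation

def call_lmm(prompt: str, image_path: str) -> str:
    """Hypothetical stand-in for a real large-multimodal-model call."""
    raise NotImplementedError("Wire this up to an actual LMM endpoint.")

def build_prompt(task: str, shots: List[FewShotExample], caption: str) -> str:
    """Few-shot adaptation: prepend labelled examples to the task.
    For brevity only the shots' text is inlined; a real agent would also
    pass the example images to the model."""
    demos = "\n\n".join(
        f"Meme text: {s.caption}\nLabel: {s.label}\nWhy: {s.rationale}"
        for s in shots
    )
    return f"{task}\n\nExamples:\n{demos}\n\nMeme text: {caption}\nAnswer:"

def moderate(image_path: str, caption: str,
             shots: List[FewShotExample]) -> Dict[str, str]:
    # Stage 1: detection agent classifies the meme.
    verdict = call_lmm(
        build_prompt("Classify this meme as 'hateful' or 'not hateful'.",
                     shots, caption),
        image_path)
    result = {"verdict": verdict}
    # Exact-match check so "not hateful" is not mistaken for "hateful".
    if verdict.strip().lower() == "hateful":
        # Stage 2: explanation agent justifies the decision.
        result["explanation"] = call_lmm(
            build_prompt("Explain briefly why this meme is hateful.",
                         shots, caption),
            image_path)
        # Stage 3: intervention agent drafts a pre-posting warning.
        result["intervention"] = call_lmm(
            build_prompt("Write a short message discouraging the user from "
                         "posting this meme.", shots, caption),
            image_path)
    return result

Keeping all three agents behind a single moderate() entry point mirrors the paper's central argument that detection, explanation, and intervention belong in one real-world moderation flow rather than being studied separately.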
Similar Papers
MemeIntel: Explainable Detection of Propagandistic and Hateful Memes
Computation and Language
Helps computers spot propaganda and hate in meme pictures.
Demystifying Hateful Content: Leveraging Large Multimodal Models for Hateful Meme Detection with Explainable Decisions
Computation and Language
Helps computers explain why memes are hateful.
Detecting and Mitigating Hateful Content in Multimodal Memes with Vision-Language Models
CV and Pattern Recognition
Changes hateful memes into harmless ones.