Reasoning-Aware Multimodal Fusion for Hateful Video Detection
By: Shuonan Yang, Tailin Chen, Jiangbei Yue, and more
Potential Business Impact:
Finds hate speech hidden in online videos.
Hate speech in online videos poses an increasingly serious threat to digital platforms, especially as video content becomes more multimodal and context-dependent. Existing methods often struggle to fuse the complex semantic relationships between modalities effectively and lack the ability to understand nuanced hateful content. To address these issues, we propose an innovative Reasoning-Aware Multimodal Fusion (RAMF) framework. To tackle the first challenge, we design Local-Global Context Fusion (LGCF) to capture both local salient cues and global temporal structures, and propose Semantic Cross Attention (SCA) to enable fine-grained multimodal semantic interaction. To tackle the second challenge, we introduce adversarial reasoning: a structured three-stage process in which a vision-language model generates (i) objective descriptions, (ii) hate-assumed inferences, and (iii) non-hate-assumed inferences, providing complementary semantic perspectives that enrich the model's contextual understanding of nuanced hateful intent. Evaluations on two real-world hateful video datasets demonstrate that our method achieves robust generalisation performance, improving upon state-of-the-art methods by 3% and 7% in Macro-F1 and hate-class recall, respectively. We will release the code after the anonymity period ends.
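The abstract names the components without giving implementation details, so the following is only a minimal, hypothetical PyTorch sketch of what a cross-modal attention fusion step and the three-stage adversarial reasoning prompts could look like. The class name, feature dimensions, and prompt wordings are assumptions for illustration, not the authors' released code.

```python
import torch
import torch.nn as nn


class SemanticCrossAttentionSketch(nn.Module):
    """Hypothetical cross-attention fusion: query tokens from one modality
    attend over context tokens from another (e.g. text over video frames)."""

    def __init__(self, dim: int = 256, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, query_feats: torch.Tensor, context_feats: torch.Tensor) -> torch.Tensor:
        # query_feats:   (B, Nq, D) e.g. text or reasoning tokens
        # context_feats: (B, Nk, D) e.g. frame-level video tokens
        attended, _ = self.attn(query_feats, context_feats, context_feats)
        return self.norm(query_feats + attended)  # residual fusion of attended context


# Hypothetical prompts mirroring the three-stage adversarial reasoning described
# in the abstract: (i) objective description, (ii) hate-assumed inference,
# (iii) non-hate-assumed inference.
ADVERSARIAL_REASONING_PROMPTS = [
    "Describe objectively what is shown and said in this video.",
    "Assume the video is hateful: explain which cues would make it hateful.",
    "Assume the video is not hateful: explain why its content could be benign.",
]


if __name__ == "__main__":
    sca = SemanticCrossAttentionSketch(dim=256, heads=4)
    video_tokens = torch.randn(2, 32, 256)  # 32 frame tokens per clip (assumed)
    text_tokens = torch.randn(2, 16, 256)   # 16 text/reasoning tokens per clip (assumed)
    fused = sca(text_tokens, video_tokens)
    print(fused.shape)  # torch.Size([2, 16, 256])
```

In practice, the reasoning outputs produced by a vision-language model from such prompts would be encoded and fused alongside the video and text streams; the sketch above only shows the general shape of that interaction, not the paper's exact architecture.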
Similar Papers
MultiHateLoc: Towards Temporal Localisation of Multimodal Hate Content in Online Videos
CV and Pattern Recognition
Finds hate speech hidden in videos.
Multimodal Hate Detection Using Dual-Stream Graph Neural Networks
CV and Pattern Recognition
Finds hate in videos by focusing on bad parts.
Enhanced Multimodal Hate Video Detection via Channel-wise and Modality-wise Fusion
Multimedia
Finds hate videos hidden in text, sound, and pictures.