Improving Implicit Hate Speech Detection via a Community-Driven Multi-Agent Framework
By: Ewelina Gajewska, Katarzyna Budzynska, Jarosław A Chudziak
This work proposes a contextualised detection framework for implicitly hateful speech, implemented as a multi-agent system comprising a central Moderator Agent and dynamically constructed Community Agents representing specific demographic groups. Our approach explicitly integrates socio-cultural context from publicly available knowledge sources, enabling identity-aware moderation that surpasses state-of-the-art prompting methods (zero-shot, few-shot, and chain-of-thought prompting) and alternative approaches on the challenging ToxiGen dataset. We strengthen the technical rigour of the performance evaluation by adopting balanced accuracy as a central metric of classification fairness, since it accounts for the trade-off between true positive and true negative rates. We demonstrate that our community-driven consultative framework significantly improves both classification accuracy and fairness across all target groups.
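The abstract's two central ideas — a Moderator Agent that consults per-group Community Agents, and balanced accuracy as the evaluation metric — can be illustrated with a minimal sketch. All names (`CommunityAgent`, `moderate`) and the majority-vote aggregation rule are illustrative assumptions, not the paper's actual implementation; the `judge` callable stands in for an LLM call grounded in socio-cultural context.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class CommunityAgent:
    """Hypothetical agent for one demographic group; `judge` is a stand-in
    for an LLM call grounded in that group's socio-cultural context."""
    group: str
    judge: Callable[[str], bool]  # True = text judged implicitly hateful

def moderate(text: str, agents: List[CommunityAgent]) -> bool:
    """Moderator Agent (sketch): flag the text if a majority of the
    consulted Community Agents consider it implicitly hateful."""
    votes = [agent.judge(text) for agent in agents]
    return sum(votes) > len(votes) / 2

def balanced_accuracy(y_true: List[bool], y_pred: List[bool]) -> float:
    """Balanced accuracy = (TPR + TNR) / 2 — the fairness-aware metric
    the abstract highlights, robust to class imbalance."""
    tp = sum(t and p for t, p in zip(y_true, y_pred))
    tn = sum((not t) and (not p) for t, p in zip(y_true, y_pred))
    pos = sum(y_true)
    neg = len(y_true) - pos
    return 0.5 * (tp / pos + tn / neg)
```

Unlike plain accuracy, balanced accuracy averages the per-class recall, so a classifier that over-flags (or under-flags) one class cannot inflate its score on an imbalanced test set.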