Addressing Antisocial Behavior in Multi-Party Dialogs Through Multimodal Representation Learning
By: Hajar Bakarou, Mohamed Sinane El Messoussi, Anaïs Ollagnier
Potential Business Impact:
Finds online bullying in group chats.
Antisocial behavior (ASB) on social media, including hate speech, harassment, and cyberbullying, poses growing risks to platform safety and societal well-being. Prior research has focused largely on networks such as X and Reddit, while multi-party conversational settings remain underexplored due to limited data. To address this gap, we use CyberAgressionAdo-Large, a French open-access dataset simulating ASB in multi-party conversations, and evaluate three tasks: abuse detection, bullying behavior analysis, and bullying peer-group identification. We benchmark six text-based and eight graph-based representation-learning methods, analyzing lexical cues, interactional dynamics, and their multimodal fusion. Results show that multimodal models outperform unimodal baselines. The late fusion model mBERT + WD-SGCN achieves the best overall results, with top performance on abuse detection (0.718) and competitive scores on peer-group identification (0.286) and bullying behavior analysis (0.606). Error analysis highlights its effectiveness in handling nuanced ASB phenomena such as implicit aggression, role transitions, and context-dependent hostility.
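To give a rough idea of what late fusion of text and graph representations looks like, here is a minimal sketch. It is not the authors' mBERT + WD-SGCN pipeline: the random arrays stand in for pooled mBERT sentence embeddings and for node embeddings of the conversation graph, and the two scikit-learn logistic regressions stand in for the paper's per-modality classifiers. All of those stand-ins are assumptions made for illustration.

# Hedged sketch of late fusion over two modalities (text + graph).
# Embeddings and labels are synthetic placeholders, not data from the paper.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
n_msgs = 200

# Placeholder modalities:
# - text_emb: message-level text embeddings (e.g., 768-dim pooled transformer outputs)
# - graph_emb: embeddings of the message/speaker interaction graph (e.g., 128-dim)
text_emb = rng.normal(size=(n_msgs, 768))
graph_emb = rng.normal(size=(n_msgs, 128))
labels = rng.integers(0, 2, size=n_msgs)  # 1 = abusive, 0 = non-abusive (synthetic)

idx_train, idx_test = train_test_split(np.arange(n_msgs), test_size=0.3, random_state=0)

# Late fusion: train one classifier per modality, then combine their output scores.
clf_text = LogisticRegression(max_iter=1000).fit(text_emb[idx_train], labels[idx_train])
clf_graph = LogisticRegression(max_iter=1000).fit(graph_emb[idx_train], labels[idx_train])

p_text = clf_text.predict_proba(text_emb[idx_test])[:, 1]
p_graph = clf_graph.predict_proba(graph_emb[idx_test])[:, 1]
fused = 0.5 * (p_text + p_graph)           # simple average of modality scores
pred = (fused >= 0.5).astype(int)

print("macro-F1:", f1_score(labels[idx_test], pred, average="macro"))

In a real setup, the averaging step could be replaced by a learned combiner (e.g., a small classifier over the two score vectors), which is one common way late fusion is implemented.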
Similar Papers
A Machine Learning Approach for Detection of Mental Health Conditions and Cyberbullying from Social Media
Computation and Language
Finds online bullying and sadness on social media.
Conversation-Based Multimodal Abuse Detection Through Text and Graph Embeddings
Social and Information Networks
Finds online bullies by studying messages and chats.