Evaluating Moderation in Online Social Networks
By: Letizia Milli, Laura Pollacci, Riccardo Guidotti
The spread of toxic content on online platforms presents complex challenges that call for both theoretical insight and practical tools for testing intervention strategies. We introduce a simulation-based framework that extends the classical SEIZ (Susceptible-Exposed-Infected-Skeptic) epidemic model to capture the dynamics of toxic message propagation. Our simulator incorporates active moderation through two distinct variants: a basic moderator, which applies uniform, non-personalized interventions, and a smart moderator, which leverages user-specific psychological profiles based on Dark Triad traits to apply personalized, threshold-driven moderation. By varying parameter configurations, the simulator allows systematic exploration of how different moderation strategies influence user state transitions over time. Simulation results show that while generic interventions can curb toxicity under certain conditions, profile-aware moderation is significantly more effective at limiting both the spread and the persistence of toxic behavior. The framework offers a flexible and extensible tool for studying and designing adaptive moderation strategies in complex online social systems.
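To make the described pipeline concrete, here is a minimal sketch of how such a simulation loop might be structured. All names (User, BasicModerator, SmartModerator), the transition probabilities, and the Dark Triad score in [0, 1] are illustrative assumptions, not the authors' actual implementation or API.

```python
import random

# SEIZ states: susceptible, exposed, infected (toxic), skeptic.
S, E, I, Z = "susceptible", "exposed", "infected", "skeptic"

class User:
    def __init__(self, uid, dark_triad):
        self.uid = uid
        self.state = S
        self.dark_triad = dark_triad  # assumed Dark Triad score in [0, 1]

class BasicModerator:
    """Uniform intervention: same demotion probability for every user."""
    def __init__(self, p_intervene=0.1):
        self.p_intervene = p_intervene

    def moderate(self, user):
        if user.state == I and random.random() < self.p_intervene:
            user.state = Z  # toxic user pushed toward skepticism

class SmartModerator:
    """Profile-aware intervention: acts only past a Dark Triad threshold."""
    def __init__(self, threshold=0.6, p_intervene=0.5):
        self.threshold = threshold
        self.p_intervene = p_intervene

    def moderate(self, user):
        if (user.state == I and user.dark_triad >= self.threshold
                and random.random() < self.p_intervene):
            user.state = Z

def step(users, moderator, beta=0.3, epsilon=0.2):
    """One tick: exposure to toxic content, incubation, then moderation."""
    infected = [u for u in users if u.state == I]
    for u in users:
        if u.state == S and infected and random.random() < beta:
            u.state = E  # contact with a toxic message
        elif u.state == E and random.random() < epsilon:
            u.state = I  # exposed user turns toxic
    for u in users:
        moderator.moderate(u)

users = [User(i, random.random()) for i in range(100)]
users[0].state = I  # seed one toxic user
mod = SmartModerator()
for _ in range(50):
    step(users, mod)
print({s: sum(u.state == s for u in users) for s in (S, E, I, Z)})
```

Treating the moderator as a pluggable object mirrors the comparison the abstract describes: the uniform and profile-aware strategies can be swapped under identical parameter configurations, isolating the effect of personalization on the resulting state transitions.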