Evaluating Online Moderation Via LLM-Powered Counterfactual Simulations

Published: November 10, 2025 | arXiv ID: 2511.07204v1

By: Giacomo Fidone, Lucia Passaro, Riccardo Guidotti

Potential Business Impact:

Lets platforms test and compare content-moderation strategies in simulation before deploying them, reducing the cost and risk of live experiments on real users.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Online Social Networks (OSNs) widely adopt content moderation to mitigate the spread of abusive and toxic discourse. Nonetheless, the real effectiveness of moderation interventions remains unclear due to the high cost of data collection and limited experimental control. The latest developments in Natural Language Processing pave the way for a new evaluation approach: Large Language Models (LLMs) can be leveraged to enhance Agent-Based Modeling and simulate human-like social behavior with an unprecedented degree of believability. Yet, existing tools do not support simulation-based evaluation of moderation strategies. We fill this gap by designing an LLM-powered simulator of OSN conversations that enables a parallel, counterfactual simulation in which toxic behavior is influenced by moderation interventions while all else is kept equal. We conduct extensive experiments, unveiling the psychological realism of OSN agents, the emergence of social contagion phenomena, and the superior effectiveness of personalized moderation strategies.
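The evaluation design hinges on a paired run: a factual simulation without interventions and a counterfactual one with them, sharing the same agent population and random seed so that the intervention is the only difference. Below is a minimal sketch of such a harness, using a toy probabilistic agent in place of the paper's LLM-driven agents; all names, parameters, and the warning-based intervention are illustrative assumptions, not the authors' implementation.

```python
import random
from dataclasses import dataclass


@dataclass
class Agent:
    """Toy stand-in for an LLM-driven OSN agent (hypothetical, not the paper's design)."""
    agent_id: int
    toxicity_propensity: float  # baseline chance of posting a toxic message
    warned: bool = False        # flipped when a moderation intervention targets this agent


def posts_toxic(agent: Agent, neighbor_toxic_rate: float, rng: random.Random) -> bool:
    """One conversation turn: toxicity combines the agent's own propensity with
    social contagion from toxic posts observed in the previous turn."""
    p = agent.toxicity_propensity + 0.3 * neighbor_toxic_rate  # assumed contagion weight
    if agent.warned:
        p *= 0.5  # assumed dampening effect of a personalized warning
    return rng.random() < min(p, 1.0)


def run_simulation(seed: int, moderate: bool, n_agents: int = 50, n_steps: int = 40) -> float:
    """Run one simulation and return the overall toxic-post rate.

    With the same seed, the factual (moderate=False) and counterfactual
    (moderate=True) runs differ only in whether interventions fire.
    """
    rng = random.Random(seed)
    agents = [Agent(i, rng.uniform(0.05, 0.4)) for i in range(n_agents)]
    toxic_rate, total_toxic = 0.0, 0
    for _ in range(n_steps):
        toxic_now = 0
        for agent in agents:
            if posts_toxic(agent, toxic_rate, rng):
                toxic_now += 1
                if moderate and not agent.warned:
                    agent.warned = True  # personalized intervention on first offense
        toxic_rate = toxic_now / n_agents
        total_toxic += toxic_now
    return total_toxic / (n_agents * n_steps)


if __name__ == "__main__":
    seed = 42
    factual = run_simulation(seed, moderate=False)
    counterfactual = run_simulation(seed, moderate=True)
    print(f"toxic-post rate without moderation: {factual:.3f}")
    print(f"toxic-post rate with moderation:    {counterfactual:.3f}")
```

Because both runs draw from identically seeded RNGs and consume exactly one draw per agent per step, their random streams stay aligned, so any divergence in toxicity is attributable to the intervention alone. This is the "keeping all else equal" property the counterfactual comparison relies on.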

Country of Origin
🇮🇹 Italy

Repos / Data Links

Page Count
19 pages

Category
Computer Science:
Artificial Intelligence