Effectively Detecting and Responding to Online Harassment with Large Language Models

Published: November 28, 2025 | arXiv ID: 2512.14700v1

By: Pinxian Lu, Nimra Ishfaq, Emma Win, and more

Potential Business Impact:

Helps stop online bullying in private chats.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Online harassment has been a persistent issue in the online space. Research has predominantly focused on harassment on public social media platforms, while less attention has been paid to private messaging platforms. To address online harassment on one such platform, Instagram, we leverage the capabilities of Large Language Models (LLMs). To achieve this, we recruited human labelers to identify online harassment in a dataset of Instagram messages. Using the preceding conversation as context, we apply an LLM pipeline to conduct large-scale labeling of Instagram messages and evaluate its performance against the human labels. We then use an LLM to generate and evaluate simulated responses to online harassment messages. We find that the LLM labeling pipeline is capable of identifying online harassment in private messages. By comparing human responses with simulated responses, we also demonstrate that the simulated responses are rated as more helpful than the original human responses.
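The abstract describes a pipeline that labels each message using the preceding conversation as context. A minimal sketch of that idea is below; the function names and prompt format are illustrative assumptions (the paper's actual prompts and model are not given here), and a keyword stub stands in for the real LLM call.

```python
# Hypothetical sketch of a context-aware harassment-labeling pipeline.
# Function names and the prompt template are assumptions, not the paper's
# actual implementation; a real pipeline would call an LLM API in `llm`.

def build_prompt(context, message):
    """Format the prior conversation plus the target message for the labeler."""
    history = "\n".join(f"{speaker}: {text}" for speaker, text in context)
    return (
        "Previous conversation:\n"
        f"{history}\n\n"
        f"Target message: {message}\n"
        "Label the target message as HARASSMENT or NOT_HARASSMENT."
    )

def label_message(context, message, llm=None):
    """Label one message. `llm` is any callable prompt -> label string.

    The default stub is a crude keyword heuristic standing in for an LLM,
    so the sketch runs without network access."""
    prompt = build_prompt(context, message)
    if llm is None:
        def llm(p):
            insults = ("idiot", "ugly", "loser")
            return ("HARASSMENT"
                    if any(word in p.lower() for word in insults)
                    else "NOT_HARASSMENT")
    return llm(prompt)

if __name__ == "__main__":
    context = [("A", "Did you see my post?"), ("B", "Yeah.")]
    print(label_message(context, "You're such a loser."))  # HARASSMENT (stub)
    print(label_message(context, "Nice photo!"))           # NOT_HARASSMENT (stub)
```

In a real deployment the `llm` callable would wrap an API call, and the human labels collected in the study would serve as the evaluation set for the pipeline's output.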

Country of Origin
🇺🇸 United States

Page Count
16 pages

Category
Computer Science:
Social and Information Networks