Effectively Detecting and Responding to Online Harassment with Large Language Models
By: Pinxian Lu, Nimra Ishfaq, Emma Win, and more
Potential Business Impact:
Helps stop online bullying in private chats.
Online harassment has been a persistent issue in online spaces. Research has predominantly focused on harassment on public social media platforms, while less attention has been paid to private messaging platforms. To address online harassment on one private messaging platform, Instagram, we leverage the capabilities of Large Language Models (LLMs). We recruited human labelers to identify online harassment in a dataset of Instagram messages. Using the preceding conversation as context, we apply an LLM pipeline to label Instagram messages at scale and evaluate its performance against the human labels. We then use an LLM to generate and evaluate simulated responses to harassing messages. We find that the LLM labeling pipeline is capable of identifying online harassment in private messages. By comparing human responses with simulated responses, we also show that the simulated responses are rated as more helpful than the original human responses.
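The abstract's context-aware labeling step could be sketched as follows. This is a minimal illustration, not the paper's actual pipeline: the prompt wording, context window size, label set, and the stub classifier standing in for a real LLM API call are all assumptions made for demonstration.

```python
from typing import Callable, List

def build_prompt(context: List[str], message: str, window: int = 5) -> str:
    """Format the last `window` conversation turns plus the target
    message into a single classification prompt (hypothetical wording)."""
    history = "\n".join(context[-window:])
    return (
        "You are labeling private messages for online harassment.\n"
        f"Conversation so far:\n{history}\n"
        f"Message to label: {message}\n"
        "Answer with exactly one word: HARASSMENT or OK."
    )

def label_message(context: List[str], message: str,
                  llm: Callable[[str], str]) -> str:
    """Send the prompt to any prompt->text LLM function and
    normalize its answer to one of the two labels."""
    answer = llm(build_prompt(context, message)).strip().upper()
    return "HARASSMENT" if answer.startswith("HARASSMENT") else "OK"

# Keyword stub standing in for a real LLM, so the sketch runs offline.
stub = lambda prompt: "HARASSMENT" if "idiot" in prompt.lower() else "OK"
print(label_message(["A: hi", "B: hello"], "you idiot", stub))
```

A real deployment would replace `stub` with an API call and would likely evaluate agreement against the human labels (e.g., precision/recall per label) rather than a single normalized string.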
Similar Papers
Promoting Security and Trust on Social Networks: Explainable Cyberbullying Detection Using Large Language Models in a Stream-Based Machine Learning Framework
Social and Information Networks
Finds online bullies fast to keep kids safe.
A Machine Learning Approach for Detection of Mental Health Conditions and Cyberbullying from Social Media
Computation and Language
Finds online bullying and sadness on social media.