LLM in the Loop: Creating the ParaDeHate Dataset for Hate Speech Detoxification
By: Shuzhou Yuan, Ercong Nie, Lukas Kouba, and more
Potential Business Impact:
Cleans up mean online words automatically.
Detoxification, the task of rewriting harmful language into non-toxic text, has become increasingly important amid the growing prevalence of toxic content online. However, high-quality parallel datasets for detoxification, especially for hate speech, remain scarce due to the cost and sensitivity of human annotation. In this paper, we propose a novel LLM-in-the-loop pipeline leveraging GPT-4o-mini for automated detoxification. We first replicate the ParaDetox pipeline by replacing human annotators with an LLM and show that the LLM performs comparably to human annotators. Building on this, we construct ParaDeHate, a large-scale parallel dataset specifically for hate speech detoxification. We release ParaDeHate as a benchmark of over 8K hate/non-hate text pairs and evaluate a wide range of baseline methods. Experimental results show that models such as BART, fine-tuned on ParaDeHate, achieve better performance in style accuracy, content preservation, and fluency, demonstrating the effectiveness of LLM-generated detoxified text as a scalable alternative to human annotation.
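The core idea of the pipeline is to have an LLM stand in for human annotators: each toxic input is sent to GPT-4o-mini with an instruction to produce a meaning-preserving, non-toxic paraphrase, and the resulting pairs form the parallel dataset. The snippet below is a minimal sketch of such a detoxification step, assuming the OpenAI Python client; the prompt wording, temperature setting, and helper name detoxify are illustrative assumptions, not the authors' exact setup.

```python
# Minimal sketch of an LLM-in-the-loop detoxification step (assumed setup,
# not the paper's exact prompt or filtering pipeline).
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

DETOX_PROMPT = (
    "Rewrite the following message so that it contains no hate speech or "
    "toxic language, while preserving the original meaning as closely as "
    "possible. Return only the rewritten text.\n\nMessage: {text}"
)

def detoxify(text: str) -> str:
    """Ask GPT-4o-mini for a non-toxic paraphrase of a toxic input."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": DETOX_PROMPT.format(text=text)}],
        temperature=0.0,  # keep rewrites as deterministic as possible
    )
    return response.choices[0].message.content.strip()

# Example: build one (toxic, detoxified) pair for a parallel dataset.
toxic_example = "That idea is garbage and so are you."
pair = (toxic_example, detoxify(toxic_example))
```

In practice such a pipeline would also filter the LLM outputs (e.g. checking that toxicity is removed and content is preserved) before admitting a pair into the dataset; the call above shows only the rewriting step.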
Similar Papers
<think> So let's replace this phrase with insult... </think> Lessons learned from generation of toxic texts with LLMs
Computation and Language
Models trained on machine-made toxic text struggle to remove hate speech well.
LLM-based Semantic Augmentation for Harmful Content Detection
Computation and Language
Uses LLMs to augment training data for spotting harmful posts.
Rethinking Hate Speech Detection on Social Media: Can LLMs Replace Traditional Models?
Computation and Language
Helps computers spot online hate speech better.