Analyzing Bias in False Refusal Behavior of Large Language Models for Hate Speech Detoxification
By: Kyuri Im, Shuzhou Yuan, Michael Färber
While large language models (LLMs) have increasingly been applied to hate speech detoxification, detoxification prompts often trigger safety alerts, causing LLMs to refuse the task. In this study, we systematically investigate false refusal behavior in hate speech detoxification and analyze the contextual and linguistic biases that trigger such refusals. We evaluate nine LLMs on both English and multilingual datasets; our results show that LLMs disproportionately refuse inputs with higher semantic toxicity and those targeting specific groups, particularly nationality, religion, and political ideology. Although multilingual datasets exhibit lower overall false refusal rates than English datasets, models still display systematic, language-dependent biases toward certain targets. Based on these findings, we propose a simple cross-translation strategy: translating English hate speech into Chinese, detoxifying it there, and translating the result back into English. This substantially reduces false refusals while preserving the original content, providing an effective and lightweight mitigation approach.
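To make the proposed mitigation concrete, the following is a minimal Python sketch of the cross-translation pipeline, assuming a user-supplied call_llm helper that sends one prompt to any chat-capable LLM and returns its text reply. The helper name, prompt wording, and refusal heuristic are illustrative assumptions, not the paper's exact implementation.

# Minimal sketch of the cross-translation mitigation described in the abstract.
# `call_llm` is a hypothetical helper: it sends a single prompt to any
# chat-capable LLM and returns the model's text reply. Prompt wording and the
# refusal heuristic are illustrative, not the paper's exact setup.

from typing import Callable

REFUSAL_MARKERS = ("i can't", "i cannot", "i'm sorry", "unable to assist")


def looks_like_refusal(reply: str) -> bool:
    """Crude heuristic for flagging a (possibly false) refusal in a model reply."""
    lowered = reply.lower()
    return any(marker in lowered for marker in REFUSAL_MARKERS)


def cross_translation_detoxify(text_en: str, call_llm: Callable[[str], str]) -> str:
    """Detoxify English hate speech via a Chinese round-trip.

    1. Translate the English input into Chinese.
    2. Ask the model to detoxify the Chinese version (empirically less likely
       to trigger a safety refusal, per the paper's findings).
    3. Translate the detoxified Chinese text back into English.
    """
    zh_text = call_llm(
        "Translate the following English text into Chinese. "
        "Output only the translation.\n\n" + text_en
    )
    zh_detoxified = call_llm(
        "Rewrite the following Chinese text so that it conveys the same core "
        "message without toxic or hateful language. Output only the rewrite.\n\n"
        + zh_text
    )
    en_detoxified = call_llm(
        "Translate the following Chinese text into English. "
        "Output only the translation.\n\n" + zh_detoxified
    )
    return en_detoxified

In practice, call_llm can wrap any chat-completion API; looks_like_refusal can then be applied to direct English detoxification outputs to decide when the cross-translation round-trip is needed.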
Similar Papers
Characterizing Selective Refusal Bias in Large Language Models
Computation and Language
Fixes AI's unfair refusal to answer some questions.
Are LLMs Good Safety Agents or a Propaganda Engine?
Computation and Language
Tests if AI is safe or politically censored.
Beyond I'm Sorry, I Can't: Dissecting Large Language Model Refusal
Computation and Language
Makes AI ignore safety rules to answer bad questions.