Characterizing Selective Refusal Bias in Large Language Models
By: Adel Khorramrouz, Sharon Levy
Potential Business Impact:
Reveals when AI unfairly refuses to answer questions about some groups but not others.
Safety guardrails in large language models (LLMs) are developed to prevent malicious users from generating toxic content at a large scale. However, these measures can inadvertently introduce or reflect new biases, as LLMs may refuse to generate harmful content targeting some demographic groups and not others. We explore this selective refusal bias in LLM guardrails through the lens of refusal rates of targeted individual and intersectional demographic groups, types of LLM responses, and length of generated refusals. Our results show evidence of selective refusal bias across gender, sexual orientation, nationality, and religion attributes. This leads us to investigate additional safety implications via an indirect attack, where we target previously refused groups. Our findings emphasize the need for more equitable and robust performance in safety guardrails across demographic groups.
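To make the core measurement concrete, below is a minimal sketch of how refusal rates could be compared across targeted demographic groups. This is not the authors' code: the group names, sample responses, and the keyword-based refusal detector are all illustrative assumptions, not the paper's actual setup.

```python
# Minimal sketch (not the paper's implementation): estimating selective refusal
# bias by comparing refusal rates across targeted demographic groups.
# Group labels, sample outputs, and the keyword heuristic are assumptions.

# Hypothetical model outputs keyed by the demographic group targeted in the prompt.
responses_by_group = {
    "group_a": ["I can't help with that.", "Sure, here is the text you asked for ..."],
    "group_b": ["I'm sorry, but I can't assist with that.", "I cannot comply with this request."],
}

REFUSAL_MARKERS = ("i can't", "i cannot", "i'm sorry", "i am unable")


def is_refusal(text: str) -> bool:
    """Crude keyword heuristic for flagging a response as a refusal."""
    lowered = text.lower()
    return any(marker in lowered for marker in REFUSAL_MARKERS)


def refusal_rates(groups: dict[str, list[str]]) -> dict[str, float]:
    """Fraction of responses per group that the heuristic flags as refusals."""
    return {
        group: (sum(is_refusal(o) for o in outputs) / len(outputs) if outputs else 0.0)
        for group, outputs in groups.items()
    }


rates = refusal_rates(responses_by_group)
# A large gap between groups is the kind of signal the paper calls selective refusal bias.
gap = max(rates.values()) - min(rates.values())
print(rates, f"refusal-rate gap: {gap:.2f}")
```

In practice a more robust refusal classifier (e.g., a trained judge model rather than keyword matching) would be needed, but the comparison of per-group refusal rates illustrates the measurement the abstract describes.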
Similar Papers
Silenced Biases: The Dark Side LLMs Learned to Refuse
Computation and Language
Finds hidden unfairness in AI that safety features hide.
Are LLMs Good Safety Agents or a Propaganda Engine?
Computation and Language
Tests if AI is safe or politically censored.