Characterizing Selective Refusal Bias in Large Language Models

Published: October 31, 2025 | arXiv ID: 2510.27087v1

By: Adel Khorramrouz, Sharon Levy

Potential Business Impact:

Identifies when AI safety filters unfairly refuse to answer questions about some demographic groups but not others.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Safety guardrails in large language models (LLMs) are developed to prevent malicious users from generating toxic content at scale. However, these measures can inadvertently introduce or reflect new biases, as LLMs may refuse to generate harmful content targeting some demographic groups but not others. We explore this selective refusal bias in LLM guardrails through the lens of refusal rates for targeted individual and intersectional demographic groups, types of LLM responses, and the length of generated refusals. Our results show evidence of selective refusal bias across gender, sexual orientation, nationality, and religion attributes. This leads us to investigate additional safety implications via an indirect attack, where we target previously refused groups. Our findings emphasize the need for more equitable and robust performance in safety guardrails across demographic groups.
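The abstract describes comparing refusal rates and refusal lengths across targeted demographic groups. The sketch below illustrates the kind of per-group tally that comparison implies; it is a minimal illustration, not the paper's method. The keyword-based is_refusal check and the record format are assumptions made for this example only.

```python
from collections import defaultdict

# Hypothetical refusal-phrase heuristic; the paper's actual response
# categorization (refusal vs. compliance vs. other) is not specified here.
REFUSAL_MARKERS = ("i can't", "i cannot", "i won't", "i'm sorry", "i am unable")

def is_refusal(response: str) -> bool:
    """Crude keyword check standing in for a real refusal classifier."""
    text = response.lower()
    return any(marker in text for marker in REFUSAL_MARKERS)

def refusal_rates(records):
    """Compute per-group refusal rate and mean refusal length.

    `records` is assumed to be an iterable of dicts like
    {"group": "nationality:X", "response": "..."} built from prompts
    targeting a single or intersectional demographic group.
    """
    counts = defaultdict(lambda: {"total": 0, "refused": 0, "refusal_chars": 0})
    for rec in records:
        stats = counts[rec["group"]]
        stats["total"] += 1
        if is_refusal(rec["response"]):
            stats["refused"] += 1
            stats["refusal_chars"] += len(rec["response"])
    return {
        group: {
            "refusal_rate": s["refused"] / s["total"],
            "mean_refusal_length": (s["refusal_chars"] / s["refused"]) if s["refused"] else 0.0,
        }
        for group, s in counts.items()
    }
```

Comparing the resulting per-group rates (e.g., across gender, nationality, or religion attributes) is what would surface the selective refusal bias the paper reports.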

Country of Origin
🇺🇸 United States

Page Count
21 pages

Category
Computer Science: Computation and Language