Think Before Refusal: Triggering Safety Reflection in LLMs to Mitigate False Refusal Behavior
By: Shengyun Si, Xinpeng Wang, Guangyao Zhai, and more
Potential Business Impact:
Helps AI answer questions without wrongly saying "no."
Recent advancements in large language models (LLMs) have demonstrated that fine-tuning and human alignment can render LLMs harmless. In practice, such "harmlessness" behavior is mainly achieved by training models to reject harmful requests, such as "Explain how to burn down my neighbor's house", where the model appropriately declines to respond. However, this approach can inadvertently result in false refusal, where models reject benign queries as well, such as "Tell me how to kill a Python process". In this work, we demonstrate that prompting safety reflection before generating a response can mitigate false refusal behavior. Building on this finding, we introduce the Think-Before-Refusal (TBR) schema and conduct safety-aware instruction fine-tuning incorporating safety reflection. In an ablation study across 15 pre-trained models, we show that models fine-tuned with safety reflection significantly reduce false refusal behavior while maintaining safety and overall performance compared to those fine-tuned without safety reflection.
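To make the idea concrete, below is a minimal sketch of a think-before-refusal style inference loop. The exact reflection prompt wording, the HARMFUL/BENIGN labels, and the `generate` stub are illustrative assumptions, not the authors' TBR template; the paper embeds safety reflection through safety-aware instruction fine-tuning rather than a fixed two-stage prompt, so treat this as a conceptual approximation only.

```python
# Conceptual sketch: reflect on whether a request is genuinely harmful
# before deciding to refuse or answer. All prompt text and labels here
# are illustrative assumptions, not the paper's actual fine-tuning data.

from typing import Callable

REFLECTION_PROMPT = (
    "Before answering, briefly reflect: is the user's request actually harmful, "
    "or does it merely contain harmful-sounding words (e.g. 'kill a Python process')? "
    "Reply with 'HARMFUL' or 'BENIGN' plus one sentence of reasoning.\n\n"
    "User request: {query}\nReflection:"
)


def think_before_refusal(query: str, generate: Callable[[str], str]) -> str:
    """Run a safety reflection step first, then either refuse or answer."""
    reflection = generate(REFLECTION_PROMPT.format(query=query)).upper()
    if "HARMFUL" in reflection and "BENIGN" not in reflection:
        # Reflection judged the request harmful: refuse.
        return "I'm sorry, but I can't help with that request."
    # Reflection judged the request benign: answer normally.
    return generate(f"User request: {query}\nAnswer:")


if __name__ == "__main__":
    # Dummy stand-in for an LLM call so the sketch runs end to end;
    # swap in a real model's generate function in practice.
    def dummy_generate(prompt: str) -> str:
        if prompt.startswith("Before answering"):
            return "BENIGN: the request is about terminating a software process."
        return "Use `kill <pid>` or `pkill -f <name>` to stop a Python process."

    print(think_before_refusal("Tell me how to kill a Python process", dummy_generate))
```

In this toy flow, the reflection step gives the model a chance to notice that "kill a Python process" is benign despite the surface wording, which is the intuition behind reducing false refusals while still declining genuinely harmful requests.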
Similar Papers
Beyond I'm Sorry, I Can't: Dissecting Large Language Model Refusal
Computation and Language
Makes AI ignore safety rules to answer bad questions.
Beyond Over-Refusal: Scenario-Based Diagnostics and Post-Hoc Mitigation for Exaggerated Refusals in LLMs
Computation and Language
Fixes AI that wrongly says "no" to safe questions.
Think-Reflect-Revise: A Policy-Guided Reflective Framework for Safety Alignment in Large Vision Language Models
CV and Pattern Recognition
Teaches AI to catch its own bad ideas.