RefusalBench: Generative Evaluation of Selective Refusal in Grounded Language Models
By: Aashiq Muhamed, Leonardo F. R. Ribeiro, Markus Dreyer, and more
Potential Business Impact:
Helps AI know when its source material is too flawed to answer.
The ability of language models in retrieval-augmented generation (RAG) systems to selectively refuse to answer based on flawed context is critical for safety, yet remains a significant failure point. Our large-scale study reveals that even frontier models struggle in this setting, with refusal accuracy dropping below 50% on multi-document tasks, while exhibiting either dangerous overconfidence or overcaution. Static benchmarks fail to reliably evaluate this capability, as models exploit dataset-specific artifacts and memorize test instances. We introduce RefusalBench, a generative methodology that programmatically creates diagnostic test cases through controlled linguistic perturbation. Our framework employs 176 distinct perturbation strategies across six categories of informational uncertainty and three intensity levels. Evaluation of over 30 models uncovers systematic failure patterns: refusal comprises separable detection and categorization skills, and neither scale nor extended reasoning improves performance. We find that selective refusal is a trainable, alignment-sensitive capability, offering a clear path for improvement. We release two benchmarks -- RefusalBench-NQ (single document) and RefusalBench-GaRAGe (multi-document) -- and our complete generation framework to enable continued, dynamic evaluation of this critical capability.
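To make the generative methodology concrete, here is a minimal sketch of how a perturbation-based test case generator of this kind could be structured. This is not the authors' released framework: the category labels, function names, and intensity handling below are illustrative assumptions, standing in for the paper's 176 perturbation strategies, six uncertainty categories, and three intensity levels.

```python
# Hypothetical sketch of a perturbation-based refusal test case generator.
# Category names and the perturb() logic are placeholders, not the paper's
# actual strategies or released code.
import random
from dataclasses import dataclass

# Illustrative labels only; the paper defines its own six categories
# of informational uncertainty.
CATEGORIES = [
    "contradiction", "missing_information", "ambiguity",
    "outdated_information", "granularity_mismatch", "unreliable_source",
]
INTENSITIES = ["low", "medium", "high"]

@dataclass
class TestCase:
    question: str
    perturbed_context: str
    category: str
    intensity: str
    expected_behavior: str  # refuse, labeled with the matching category

def perturb(context: str, category: str, intensity: str) -> str:
    """Apply a perturbation strategy to a grounded context.

    A real implementation would dispatch to one of many concrete
    strategies; here we simply tag the context so the sketch stays
    self-contained and runnable.
    """
    return f"[{category}/{intensity}] {context}"

def generate_case(question: str, context: str, rng: random.Random) -> TestCase:
    """Sample a category and intensity, then build one diagnostic test case."""
    category = rng.choice(CATEGORIES)
    intensity = rng.choice(INTENSITIES)
    return TestCase(
        question=question,
        perturbed_context=perturb(context, category, intensity),
        category=category,
        intensity=intensity,
        expected_behavior=f"refuse:{category}",
    )

if __name__ == "__main__":
    rng = random.Random(0)
    case = generate_case(
        "When was the bridge completed?",
        "The bridge was completed in 1937.",
        rng,
    )
    print(case)
```

A generator along these lines makes the evaluation dynamic: because test instances are produced programmatically from fresh source passages, models cannot rely on memorized benchmark items, and each case carries both a refusal label and an uncertainty category, matching the paper's separation of detection and categorization skills.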
Similar Papers
Steering Over-refusals Towards Safety in Retrieval Augmented Generation
Computation and Language
Helps AI understand safe questions better.
Characterizing Selective Refusal Bias in Large Language Models
Computation and Language
Shows when AI unfairly refuses to answer some questions.
DUAL-Bench: Measuring Over-Refusal and Robustness in Vision-Language Models
Computation and Language
Helps AI understand when to answer and when to warn.