DUAL-Bench: Measuring Over-Refusal and Robustness in Vision-Language Models
By: Kaixuan Ren, Preslav Nakov, Usman Naseem
Potential Business Impact:
Helps AI understand when to answer and when to warn.
As vision-language models become increasingly capable, maintaining a balance between safety and usefulness remains a central challenge. Safety mechanisms, while essential, can backfire, causing over-refusal, where models decline benign requests out of excessive caution. Yet, no existing benchmark has systematically addressed over-refusal in the visual modality. This setting introduces unique challenges, such as dual-use cases where an instruction is harmless, but the accompanying image contains harmful content. Models frequently fail in such scenarios, either refusing too conservatively or completing tasks unsafely, which highlights the need for more fine-grained alignment. The ideal behavior is safe completion, i.e., fulfilling the benign parts of a request while explicitly warning about any potentially harmful elements. To address this, we present DUAL-Bench, the first multimodal benchmark focused on over-refusal and safe completion in VLMs. We evaluated 18 VLMs across 12 hazard categories, with focus on their robustness under semantics-preserving visual perturbations. The results reveal substantial room for improvement: GPT-5-Nano achieves 12.9% safe completion, GPT-5 models average 7.9%, and Qwen models only 3.9%. We hope that DUAL-Bench will foster the development of more nuanced alignment strategies that ensure models remain both safe and useful in complex multimodal settings.
Similar Papers
VLSU: Mapping the Limits of Joint Multimodal Understanding for AI Safety
CV and Pattern Recognition
Finds when pictures and words together make AI unsafe.
RefusalBench: Generative Evaluation of Selective Refusal in Grounded Language Models
Computation and Language
Makes AI know when it's wrong.
Beyond Over-Refusal: Scenario-Based Diagnostics and Post-Hoc Mitigation for Exaggerated Refusals in LLMs
Computation and Language
Fixes AI that wrongly says "no" to safe questions.