Towards Evaluating Proactive Risk Awareness of Multimodal Language Models
By: Youliang Yuan, Wenxiang Jiao, Yuejin Xie, and more
Potential Business Impact:
AI spots dangers before they happen.
Human safety awareness gaps often prevent the timely recognition of everyday risks. To address this problem, a proactive safety artificial intelligence (AI) system would work better than a reactive one: instead of merely responding to users' questions, it would actively monitor people's behavior and environment to detect potential dangers in advance. Our Proactive Safety Bench (PaSBench) evaluates this capability through 416 multimodal scenarios (128 image sequences, 288 text logs) spanning 5 safety-critical domains. Evaluation of 36 advanced models reveals fundamental limitations: top performers such as Gemini-2.5-pro achieve 71% accuracy on image scenarios and 64% on text scenarios, yet miss 45-55% of risks in repeated trials. Through failure analysis, we identify unstable proactive reasoning, rather than knowledge deficits, as the primary limitation. This work establishes (1) a proactive safety benchmark, (2) systematic evidence of model limitations, and (3) critical directions for developing reliable protective AI. We believe our dataset and findings can promote the development of safer AI assistants that actively prevent harm rather than merely respond to requests. Our dataset can be found at https://huggingface.co/datasets/Youliang/PaSBench.
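For readers who want to explore the benchmark directly, below is a minimal sketch of loading PaSBench with the Hugging Face `datasets` library. The dataset ID comes from the link above; the split and record structure printed here are assumptions to be checked against the actual repository, not details confirmed by the paper.

```python
# Minimal sketch: load PaSBench from the Hugging Face Hub.
# Assumes `datasets` is installed (pip install datasets).
# The dataset ID is taken from the paper's link; split names and
# record fields are assumptions and should be verified on the Hub.
from datasets import load_dataset

dataset = load_dataset("Youliang/PaSBench")

# Inspect the available splits and one sample record to discover
# the schema (e.g., image-sequence vs. text-log scenarios).
print(dataset)
for split_name, split in dataset.items():
    print(f"First record in split '{split_name}':")
    print(split[0])
    break
```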
Similar Papers
ProGuard: Towards Proactive Multimodal Safeguard
CV and Pattern Recognition
Finds and explains new AI dangers before they happen.
Multimodal Safety Evaluation in Generative Agent Social Simulations
Artificial Intelligence
Tests if AI can be safe and trustworthy.
The PacifAIst Benchmark: Would an Artificial Intelligence Choose to Sacrifice Itself for Human Safety?
Artificial Intelligence
Tests AI to make sure it helps people, not itself.