Score: 3

Towards Evaluating Proactive Risk Awareness of Multimodal Language Models

Published: May 23, 2025 | arXiv ID: 2505.17455v1

By: Youliang Yuan, Wenxiang Jiao, Yuejin Xie, and more

BigTech Affiliations: Johns Hopkins University

Potential Business Impact:

AI spots dangers before they happen.

Business Areas:
Artificial Intelligence, Data and Analytics, Science and Engineering, Software

Gaps in human safety awareness often prevent the timely recognition of everyday risks. To address this problem, a proactive safety artificial intelligence (AI) system would work better than a reactive one: instead of merely responding to users' questions, it would actively monitor people's behavior and their environment to detect potential dangers in advance. Our Proactive Safety Bench (PaSBench) evaluates this capability through 416 multimodal scenarios (128 image sequences, 288 text logs) spanning 5 safety-critical domains. Evaluation of 36 advanced models reveals fundamental limitations: top performers such as Gemini-2.5-pro achieve 71% accuracy on image scenarios and 64% on text scenarios, yet miss 45-55% of risks across repeated trials. Through failure analysis, we identify unstable proactive reasoning, rather than knowledge deficits, as the primary limitation. This work establishes (1) a proactive safety benchmark, (2) systematic evidence of model limitations, and (3) critical directions for developing reliable protective AI. We believe our dataset and findings can promote the development of safer AI assistants that actively prevent harm rather than merely respond to requests. Our dataset is available at https://huggingface.co/datasets/Youliang/PaSBench.

Country of Origin
πŸ‡­πŸ‡° πŸ‡ΊπŸ‡Έ πŸ‡¨πŸ‡³ Hong Kong, United States, China

Repos / Data Links
https://huggingface.co/datasets/Youliang/PaSBench

Page Count
20 pages

Category
Computer Science:
Computation and Language