RiskCueBench: Benchmarking Anticipatory Reasoning from Early Risk Cues in Video-Language Models
By: Sha Luo, Yogesh Prabhu, Tim Ossowski, and more
Potential Business Impact:
Spots danger in videos before it happens.
With the rapid growth of video-centered social media, the ability to anticipate risky events from visual data is a promising direction for ensuring public safety and preventing real-world accidents. Prior work has extensively studied supervised video risk assessment across domains such as driving, protests, and natural disasters. However, many existing datasets give models access to the full video sequence, including the accident itself, which substantially reduces the difficulty of the task. To better reflect real-world conditions, we introduce a new video understanding benchmark, RiskCueBench, in which videos are carefully annotated to identify a risk signal clip, defined as the earliest moment that indicates a potential safety concern. Experimental results reveal a significant gap in current systems' ability to interpret evolving situations and anticipate future risky events from early visual signals, highlighting important challenges for deploying video risk prediction models in practice.
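To make the evaluation setup concrete, the sketch below shows one way an anticipatory evaluation loop of this kind could look: the video is truncated at the end of the annotated risk signal clip, so the model never observes the accident itself, only the early cue. All names here (RiskCueAnnotation, query_vlm, the JSON annotation schema) are hypothetical illustrations, not the benchmark's actual API or data format.

```python
# Hypothetical sketch of an anticipatory evaluation loop in the spirit
# of RiskCueBench; names and formats are illustrative assumptions.
import json
from dataclasses import dataclass

@dataclass
class RiskCueAnnotation:
    video_path: str    # path to the source video
    cue_start_s: float # start of the annotated risk signal clip (seconds)
    cue_end_s: float   # end of the risk signal clip (seconds)
    risk_occurs: bool  # ground truth: does a risky event follow the cue?

def query_vlm(video_path: str, end_s: float) -> bool:
    """Placeholder for a video-language model call. A real evaluator
    would feed only the frames up to end_s (i.e., truncate the video at
    the end of the risk signal clip, before any accident is visible)
    and parse the model's yes/no answer about an impending risky event."""
    raise NotImplementedError("plug in your video-language model here")

def evaluate(annotations: list[RiskCueAnnotation]) -> float:
    """Score anticipation accuracy: the model only ever sees footage up
    to the early risk cue, never the outcome itself."""
    correct = 0
    for ann in annotations:
        pred = query_vlm(ann.video_path, end_s=ann.cue_end_s)
        correct += int(pred == ann.risk_occurs)
    return correct / max(len(annotations), 1)

if __name__ == "__main__":
    # Assumed on-disk format: a JSON list of annotation records.
    with open("risk_cue_annotations.json") as f:
        anns = [RiskCueAnnotation(**rec) for rec in json.load(f)]
    print(f"anticipation accuracy: {evaluate(anns):.3f}")
```

The key design point this loop captures is the truncation at cue_end_s: unlike full-sequence risk assessment, the model must reason forward from the earliest warning signal rather than recognize an accident it has already seen.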
Similar Papers
Seeing before Observable: Potential Risk Reasoning in Autonomous Driving via Vision Language Models
Robotics
Helps self-driving cars see danger before it happens.
ConceptGuard: Proactive Safety in Text-and-Image-to-Video Generation through Multimodal Risk Detection
CV and Pattern Recognition
Stops AI from making bad videos from pictures and words.
iSafetyBench: A video-language benchmark for safety in industrial environment
CV and Pattern Recognition
Tests AI for factory safety and jobs.