Instance-Aligned Captions for Explainable Video Anomaly Detection
By: Inpyo Song, Minjun Joo, Joonhyung Kwon, et al.
Explainable video anomaly detection (VAD) is crucial for safety-critical applications, yet despite recent progress, much of the research still lacks spatial grounding, making its explanations unverifiable. This limitation is especially pronounced in multi-entity interactions, where existing explainable VAD methods often produce incomplete or visually misaligned descriptions, reducing their trustworthiness. To address these challenges, we introduce instance-aligned captions that link each textual claim to specific object instances with appearance and motion attributes. Our framework captures who caused the anomaly, what each entity was doing, whom it affected, and where the explanation is grounded, enabling verifiable and actionable reasoning. We annotate eight widely used VAD benchmarks and extend the 360-degree egocentric dataset VIEW360 with 868 additional videos, eight locations, and four new anomaly types, creating VIEW360+, a comprehensive testbed for explainable VAD. Experiments show that our instance-level, spatially grounded captions reveal significant limitations in current LLM- and VLM-based methods while providing a robust benchmark for future research in trustworthy and interpretable anomaly detection.
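The core artifact here is the annotation itself: each caption is tied to concrete object instances carrying identity, spatial, appearance, and motion information, so every textual claim can be checked against the video. Below is a minimal Python sketch of what such a record could look like; the class and field names (InstanceRef, InstanceAlignedCaption, who, affected, and so on) are illustrative assumptions for exposition, not the paper's actual schema.

from dataclasses import dataclass
from typing import List

@dataclass
class InstanceRef:
    """One object instance referenced by a caption (hypothetical schema)."""
    instance_id: int       # track ID of the object across frames
    bbox: List[float]      # [x1, y1, x2, y2] spatial grounding in the frame
    appearance: str        # e.g. "man in a red jacket"
    motion: str            # e.g. "swinging his arm"

@dataclass
class InstanceAlignedCaption:
    """A caption whose claims are aligned to specific instances (who/what/whom/where)."""
    who: InstanceRef             # entity that caused the anomaly
    action: str                  # what the entity was doing
    affected: List[InstanceRef]  # whom the action affected
    text: str                    # full natural-language explanation

# Example record for a hypothetical fight scene:
caption = InstanceAlignedCaption(
    who=InstanceRef(3, [120.0, 40.0, 260.0, 310.0],
                    "man in a red jacket", "swinging his arm"),
    action="punches another pedestrian",
    affected=[InstanceRef(7, [250.0, 60.0, 380.0, 320.0],
                          "pedestrian in gray", "stumbling backward")],
    text="The man in a red jacket (instance 3) punches the pedestrian "
         "in gray (instance 7), who stumbles backward.",
)
print(caption.text)

Structuring the annotation this way is what makes the explanation verifiable: the "where" lives in the bounding boxes, while "who", "what", and "whom" are separate fields rather than free-form text.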
Similar Papers
DUAL-VAD: Dual Benchmarks and Anomaly-Focused Sampling for Video Anomaly Detection
CV and Pattern Recognition
Introduces dual benchmarks and anomaly-focused sampling for video anomaly detection.
GV-VAD: Exploring Video Generation for Weakly-Supervised Video Anomaly Detection
CV and Pattern Recognition
Explores video generation for weakly supervised video anomaly detection.
RefineVAD: Semantic-Guided Feature Recalibration for Weakly Supervised Video Anomaly Detection
CV and Pattern Recognition
Recalibrates features with semantic guidance for weakly supervised video anomaly detection.