Aligning Effective Tokens with Video Anomaly in Large Language Models
By: Yingxian Chen, Jiahui Liu, Ruifan Di, and more
Potential Business Impact:
Finds and pinpoints unusual events in videos.
Understanding abnormal events in videos is a vital and challenging task that has garnered significant attention across a wide range of applications. Although current video-understanding Multi-modal Large Language Models (MLLMs) can analyze general videos, they often struggle with anomalies because abnormal events are spatially and temporally sparse, and the surrounding redundant information leads to suboptimal outcomes. To address these challenges, we exploit the representation and generalization capabilities of Vision Language Models (VLMs) and Large Language Models (LLMs) and propose VA-GPT, a novel MLLM designed for summarizing and localizing abnormal events in various videos. Our approach efficiently aligns effective tokens between visual encoders and LLMs through two proposed modules: Spatial Effective Token Selection (SETS) and Temporal Effective Token Generation (TETG). These modules enable the model to capture and analyze both the spatial and temporal information associated with abnormal events, yielding more accurate responses and interactions. Furthermore, we construct an instruction-following dataset specifically for fine-tuning video-anomaly-aware MLLMs, and we introduce a cross-domain evaluation benchmark based on the XD-Violence dataset. Our method outperforms existing state-of-the-art methods on various benchmarks.
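To make the "effective token" idea concrete, here is a minimal sketch of spatial token selection in the spirit of SETS: score each visual patch token for anomaly relevance and forward only the top-scoring fraction to the LLM. The abstract does not describe the implementation, so the scoring source, `keep_ratio`, tensor shapes, and the function name `select_effective_tokens` are all illustrative assumptions, not the authors' code.

```python
# Hypothetical sketch of spatial effective token selection (not VA-GPT's
# released implementation). Assumes per-token relevance scores are
# available, e.g. from a learned head or text-visual similarity.
import torch


def select_effective_tokens(visual_tokens: torch.Tensor,
                            scores: torch.Tensor,
                            keep_ratio: float = 0.25) -> torch.Tensor:
    """Keep the visual tokens most relevant to a potential anomaly.

    visual_tokens: (B, N, D) patch tokens from the vision encoder.
    scores:        (B, N) per-token relevance scores (assumed input).
    keep_ratio:    fraction of tokens forwarded to the LLM (assumed).
    """
    b, n, d = visual_tokens.shape
    k = max(1, int(n * keep_ratio))
    # Indices of the top-k highest-scoring tokens per sample.
    topk = scores.topk(k, dim=1).indices             # (B, k)
    gather_idx = topk.unsqueeze(-1).expand(-1, -1, d)
    # Only these "effective" tokens are passed on, trimming the
    # redundancy that spatially sparse abnormal events would
    # otherwise be drowned in.
    return visual_tokens.gather(1, gather_idx)       # (B, k, D)


# Usage: 256 patch tokens per frame, keep the 64 most relevant.
tokens = torch.randn(2, 256, 768)
relevance = torch.randn(2, 256)
selected = select_effective_tokens(tokens, relevance, keep_ratio=0.25)
print(selected.shape)  # torch.Size([2, 64, 768])
```

The design intuition is the one stated in the abstract: because anomalies occupy only a small spatial region, pruning low-relevance tokens before the LLM both cuts cost and removes distracting context.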
Similar Papers
Language-Guided Temporal Token Pruning for Efficient VideoLLM Processing
CV and Pattern Recognition
Lets computers watch long videos faster.
Evaluation of Vision-LLMs in Surveillance Video
CV and Pattern Recognition
Helps computers spot unusual things in videos.
Directed-Tokens: A Robust Multi-Modality Alignment Approach to Large Language-Vision Models
CV and Pattern Recognition
Teaches AI to better understand pictures and words together.