Do Video Language Models Really Know Where to Look? Diagnosing Attention Failures in Video Language Models
By: Hyunjong Ok, Jaeho Lee
Potential Business Impact:
Helps computers understand videos better by picking key moments.
Recent advances in multimodal large language models (MLLMs) have led to substantial progress in video understanding tasks. To avoid the heavy computational cost of processing every frame, these models typically rely on keyframe sampling guided by vision-language encoders (e.g., SigLIP). However, it remains unclear whether such encoders can truly identify the most informative frames. In this work, we present several pieces of empirical evidence showing that popular vision encoders are severely limited in their ability to identify where the MLLM should look inside a video to answer a given textual query. Our findings suggest that better keyframe identification techniques may be necessary for efficient video MLLMs.
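To make the keyframe-sampling pipeline described above concrete, here is a minimal sketch of similarity-based frame selection as it is commonly paired with video MLLMs. The functions `encode_frames` and `encode_text` in the usage note are hypothetical stand-ins for a vision-language encoder such as SigLIP, and the selection logic is a generic cosine-similarity top-k, not the paper's own method; the paper's point is precisely that scores from such encoders may not reflect which frames the MLLM actually needs.

```python
import torch
import torch.nn.functional as F


def select_keyframes(frame_feats: torch.Tensor,
                     text_feat: torch.Tensor,
                     k: int = 8) -> torch.Tensor:
    """Return indices of the k frames whose embeddings best match the query.

    frame_feats: (num_frames, d) frame embeddings from the vision tower.
    text_feat:   (d,) embedding of the textual query from the text tower.
    """
    frame_feats = F.normalize(frame_feats, dim=-1)
    text_feat = F.normalize(text_feat, dim=-1)
    scores = frame_feats @ text_feat              # cosine similarity per frame
    k = min(k, scores.numel())
    top = torch.topk(scores, k).indices
    return top.sort().values                      # keep temporal order


# Usage (shapes only; embeddings would come from, e.g., a SigLIP checkpoint):
# frame_feats  = encode_frames(video_frames)   # (T, d), hypothetical helper
# text_feat    = encode_text(query)            # (d,),   hypothetical helper
# keyframe_ids = select_keyframes(frame_feats, text_feat, k=8)
```

The selected frames would then be passed to the MLLM in place of the full video; the diagnosis in this work concerns how often the encoder-derived scores point to the wrong frames for the given query.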
Similar Papers
An Empirical Study on How Video-LLMs Answer Video Questions
CV and Pattern Recognition
Explains how AI models understand videos, which can help make them faster.
An Empirical Study for Representations of Videos in Video Question Answering via MLLMs
Information Retrieval
Helps computers understand videos better and faster.
Failures to Surface Harmful Contents in Video Large Language Models
Multimedia
AI models watching videos can miss harmful content.