
Do Video Language Models Really Know Where to Look? Diagnosing Attention Failures in Video Language Models

Published: September 1, 2025 | arXiv ID: 2509.01167v1

By: Hyunjong Ok, Jaeho Lee

Potential Business Impact:

Shows that current video AI models often pick the wrong key moments to focus on, pointing the way to better frame selection for video understanding.

Business Areas:
Image Recognition, Data and Analytics, Software

Recent advances in multimodal large language models (MLLMs) have led to much progress in video understanding tasks. To avoid the heavy computational cost of processing all frames, these models typically rely on keyframe sampling methods guided by vision-language encoders (e.g., SigLIP). However, it remains unclear whether such encoders can truly identify the most informative frames. In this work, we provide several pieces of empirical evidence showing that popular vision encoders are critically limited in their ability to identify where the MLLM should look inside a video to handle a given textual query. Our findings suggest that better keyframe identification techniques may be necessary for efficient video MLLMs.
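
For context, below is a minimal sketch (not the authors' implementation) of the encoder-guided keyframe sampling the abstract refers to: each decoded frame is scored against the text query with a SigLIP-style vision-language encoder, and the top-scoring frames are passed on to the MLLM. The checkpoint name, the frame list, and the top-k value are illustrative assumptions.

```python
# Minimal sketch of query-guided keyframe selection with a SigLIP-style encoder.
# Assumptions: the "google/siglip-base-patch16-224" checkpoint, PIL frames already
# decoded from the video, and top-k selection; none of this is the paper's code.
import torch
from PIL import Image
from transformers import AutoProcessor, SiglipModel

model_id = "google/siglip-base-patch16-224"  # assumed checkpoint
processor = AutoProcessor.from_pretrained(model_id)
model = SiglipModel.from_pretrained(model_id).eval()

def select_keyframes(frames: list[Image.Image], query: str, k: int = 8) -> list[int]:
    """Return indices of the k frames whose embeddings are most similar to the query."""
    with torch.no_grad():
        # SigLIP's text tower expects fixed-length padding.
        text_in = processor(text=[query], padding="max_length", return_tensors="pt")
        image_in = processor(images=frames, return_tensors="pt")
        t = model.get_text_features(**text_in)    # (1, d) text embedding
        v = model.get_image_features(**image_in)  # (num_frames, d) frame embeddings
        t = t / t.norm(dim=-1, keepdim=True)
        v = v / v.norm(dim=-1, keepdim=True)
        sims = (v @ t.T).squeeze(-1)              # cosine similarity per frame
    return sims.topk(min(k, len(frames))).indices.tolist()
```

The paper's argument is that similarity scores produced this way often fail to rank the truly informative frames highly, so the selected keyframes may not be where the MLLM actually needs to look.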

Country of Origin
🇰🇷 Korea, Republic of

Page Count
12 pages

Category
Computer Science:
Computer Vision and Pattern Recognition