An Empirical Study on How Video-LLMs Answer Video Questions
By: Chenhui Gou, Ziyu Ma, Zicheng Duan, and more
Potential Business Impact:
Explains how AI models understand videos internally, pointing to ways to make them run faster.
Taking advantage of large-scale data and pretrained language models, Video Large Language Models (Video-LLMs) have shown strong capabilities in answering video questions. However, most existing efforts focus on improving performance, with limited attention to understanding their internal mechanisms. This paper aims to bridge this gap through a systematic empirical study. To interpret existing Video-LLMs, we adopt attention knockouts as our primary analytical tool and design three variants: Video Temporal Knockout, Video Spatial Knockout, and Language-to-Video Knockout. We then apply these knockouts across varying numbers of layers (windows of layers). By carefully controlling the window of layers and the type of knockout, we define two settings: a global setting and a fine-grained setting. Our study reveals three key findings: (1) in the global setting, video information extraction occurs primarily in the early layers, forming a clear two-stage process -- lower layers focus on perceptual encoding, while higher layers handle abstract reasoning; (2) in the fine-grained setting, certain intermediate layers exert an outsized impact on video question answering, acting as critical outliers, whereas most other layers contribute minimally; (3) in both settings, spatial-temporal modeling relies more on language-guided retrieval than on intra- and inter-frame self-attention among video tokens, despite the latter's high computational cost. Finally, we demonstrate that these insights can be leveraged to reduce attention computation in Video-LLMs. To our knowledge, this is the first work to systematically uncover how Video-LLMs internally process and understand video content, offering interpretability and efficiency perspectives for future research.
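The abstract does not give implementation details, but attention knockout is commonly realized by adding negative infinity to selected attention logits before the softmax, within a chosen window of layers. The sketch below illustrates the three knockout variants under that assumption; the token layout, function names, and hook mechanism are illustrative, not the authors' code.

```python
# A minimal sketch of attention knockouts as additive masks, assuming a
# decoder whose attention accepts an additive float mask over logits.
# Everything here (index layout, function names) is a hypothetical setup.
import torch

def knockout_mask(seq_len, src_idx, tgt_idx):
    """Additive mask that blocks attention FROM src_idx TO tgt_idx.

    The mask is added to attention logits before softmax; -inf entries
    delete the corresponding query->key attention edges.
    """
    mask = torch.zeros(seq_len, seq_len)
    src = torch.as_tensor(src_idx).unsqueeze(1)  # query positions
    tgt = torch.as_tensor(tgt_idx).unsqueeze(0)  # key positions
    mask[src, tgt] = float("-inf")
    return mask

def language_to_video_knockout(seq_len, video_idx, text_idx):
    """Stop language tokens from reading video tokens."""
    return knockout_mask(seq_len, text_idx, video_idx)

def video_temporal_knockout(seq_len, frames):
    """Block inter-frame attention among video tokens; keep intra-frame edges.

    `frames` is a list of per-frame token-index lists.
    """
    all_video = [i for f in frames for i in f]
    mask = knockout_mask(seq_len, all_video, all_video)
    for f in frames:  # re-open attention within each frame
        idx = torch.as_tensor(f)
        mask[idx.unsqueeze(1), idx.unsqueeze(0)] = 0.0
    return mask

def video_spatial_knockout(seq_len, frames):
    """Block intra-frame attention among video tokens; keep self-attention."""
    mask = torch.zeros(seq_len, seq_len)
    for f in frames:
        idx = torch.as_tensor(f)
        mask[idx.unsqueeze(1), idx.unsqueeze(0)] = float("-inf")
    diag = torch.as_tensor([i for f in frames for i in f])
    mask[diag, diag] = 0.0  # each token may still attend to itself
    return mask

# Example layout: 6 video tokens (2 frames x 3 tokens), then 4 text tokens.
seq_len = 10
frames = [[0, 1, 2], [3, 4, 5]]
mask = language_to_video_knockout(seq_len, [0, 1, 2, 3, 4, 5], [6, 7, 8, 9])
# Inside a chosen window of layers [lo, hi), the mask would be added to the
# attention logits of each self-attention module, e.g. via forward hooks.
```

Measuring how answer accuracy degrades as each mask is applied over different layer windows is one way to recover the paper's global and fine-grained comparisons.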
Similar Papers
Map the Flow: Revealing Hidden Pathways of Information in VideoLLMs
CV and Pattern Recognition
Helps explain how computers understand videos by tracing how information flows inside them.
How Important are Videos for Training Video LLMs?
CV and Pattern Recognition
Computers understand time from pictures, not just videos.
Recurrent Attention-based Token Selection for Efficient Streaming Video-LLMs
CV and Pattern Recognition
Lets computers understand long videos faster.