Audio-centric Video Understanding Benchmark without Text Shortcut
By: Yudong Yang, Jimin Zhuang, Guangzhi Sun, and more
Potential Business Impact:
Helps computers understand videos by listening.
Audio often serves as an auxiliary modality in the video understanding tasks of audio-visual large language models (LLMs), merely assisting in the comprehension of visual information. However, a thorough understanding of videos depends significantly on auditory information, as audio offers critical context, emotional cues, and semantic meaning that visual data alone often lacks. This paper proposes an audio-centric video understanding benchmark (AVUT) to evaluate the video comprehension capabilities of multimodal LLMs with a particular focus on auditory information. AVUT introduces a suite of carefully designed audio-centric tasks, holistically testing the understanding of both audio content and audio-visual interactions in videos. Moreover, this work points out the text shortcut problem that is widespread in other benchmarks, where the correct answer can be found from the question text alone without needing the video. AVUT addresses this problem with an answer permutation-based filtering mechanism. A thorough evaluation across a diverse range of open-source and proprietary multimodal LLMs is performed, followed by an analysis of the deficiencies of audio-visual LLMs. Demos and data are available at https://github.com/lark-png/AVUT.
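The abstract does not spell out how the answer permutation-based filter works, but one plausible reading is: present each multiple-choice question to a text-only model under several shuffled orderings of the candidate answers, and discard any question the model keeps answering correctly without seeing the video, since its answer leaks from the text alone. The sketch below illustrates that idea under those assumptions; the function `text_only_answer`, the number of permutations, and the discard criterion are hypothetical placeholders, not the paper's actual procedure.

```python
import random

def text_only_answer(question: str, options: list[str]) -> int:
    """Placeholder: query a text-only LLM with the question and the
    candidate answers (no video or audio) and return the index of the
    option it selects. The actual model and prompt are not specified
    in the abstract."""
    raise NotImplementedError

def passes_shortcut_filter(question: str, options: list[str],
                           correct_idx: int, n_perms: int = 4) -> bool:
    """Keep a question only if a text-only model cannot reliably locate
    the correct answer across shuffled orderings of the options.
    This is a sketch of an answer permutation-based filter; the paper's
    exact criterion may differ."""
    hits = 0
    for _ in range(n_perms):
        order = list(range(len(options)))
        random.shuffle(order)
        shuffled = [options[i] for i in order]
        pred = text_only_answer(question, shuffled)
        # Map the prediction back to the original option index.
        if order[pred] == correct_idx:
            hits += 1
    # Discard questions the text-only model always gets right:
    # their answers can be recovered from the question text alone.
    return hits < n_perms
```

In this sketch, a question survives filtering as long as the text-only model fails at least once across the permutations; a stricter or looser threshold would trade benchmark size against shortcut contamination.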
Similar Papers
Aligned Better, Listen Better for Audio-Visual Large Language Models
CV and Pattern Recognition
Helps computers understand videos by listening.
JointAVBench: A Benchmark for Joint Audio-Visual Reasoning Evaluation
Multimedia
Tests AI that understands videos and sounds together.
Does Audio Matter for Modern Video-LLMs and Their Benchmarks?
CV and Pattern Recognition
Makes videos understandable with sound, not just pictures.