Score: 1

Audio-centric Video Understanding Benchmark without Text Shortcut

Published: March 25, 2025 | arXiv ID: 2503.19951v2

By: Yudong Yang, Jimin Zhuang, Guangzhi Sun, and more

Potential Business Impact:

Helps computers understand videos by listening.

Business Areas:
Audiobooks, Media and Entertainment, Music and Audio

Audio often serves as an auxiliary modality in video understanding tasks of audio-visual large language models (LLMs), merely assisting in the comprehension of visual information. However, a thorough understanding of videos depends significantly on auditory information, as audio offers critical context, emotional cues, and semantic meaning that visual data alone often lacks. This paper proposes an audio-centric video understanding benchmark (AVUT) to evaluate the video comprehension capabilities of multimodal LLMs with a particular focus on auditory information. AVUT introduces a suite of carefully designed audio-centric tasks, holistically testing the understanding of both audio content and audio-visual interactions in videos. Moreover, this work points out the text shortcut problem that is widespread in other benchmarks, where the correct answer can be found from the question text alone without needing the video. AVUT addresses this problem with an answer permutation-based filtering mechanism. A thorough evaluation across a diverse range of open-source and proprietary multimodal LLMs is performed, followed by analyses of the deficiencies in audio-visual LLMs. Demos and data are available at https://github.com/lark-png/AVUT.
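The filtering idea can be illustrated with a minimal sketch: if a text-only model recovers the correct answer across most permutations of the answer options, the question likely contains a text shortcut and should be filtered out. The function names and the consistency threshold below are assumptions for illustration; the paper's actual mechanism may differ in detail.

```python
from itertools import permutations

def has_text_shortcut(question, options, correct, ask_text_only_llm, threshold=0.8):
    """Flag a multiple-choice question as a text shortcut.

    ask_text_only_llm(question, options) is a hypothetical callable that
    answers from text alone (no video). If it picks the correct answer in
    at least `threshold` of all answer-order permutations, the question is
    answerable without the video and should be filtered from the benchmark.
    """
    perms = list(permutations(options))
    hits = sum(
        1 for perm in perms
        if ask_text_only_llm(question, list(perm)) == correct
    )
    return hits / len(perms) >= threshold
```

A question passes the filter only if the text-only model's success depends on where the correct option happens to sit, which permuting the options washes out.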

Country of Origin
🇨🇳 China

Repos / Data Links
https://github.com/lark-png/AVUT

Page Count
19 pages

Category
Computer Science:
CV and Pattern Recognition