Hearing from Silence: Reasoning Audio Descriptions from Silent Videos via Vision-Language Model

Published: May 19, 2025 | arXiv ID: 2505.13062v3

By: Yong Ren, Chenxing Li, Le Xu, and more

Potential Business Impact:

Lets computers infer plausible sounds from silent videos.

Business Areas:
Speech Recognition Data and Analytics, Software

Humans can intuitively infer sounds from silent videos, but whether multimodal large language models can perform modal-mismatch reasoning without accessing target modalities remains relatively unexplored. Current text-assisted-video-to-audio (VT2A) methods excel in video foley tasks but struggle to acquire audio descriptions during inference. We introduce the task of Reasoning Audio Descriptions from Silent Videos (SVAD) to address this challenge and investigate vision-language models' (VLMs) capabilities on this task. To further enhance the VLMs' reasoning capacity for the SVAD task, we construct a CoT-AudioCaps dataset and propose a Chain-of-Thought-based supervised fine-tuning strategy. Experiments on SVAD and subsequent VT2A tasks demonstrate our method's effectiveness in two key aspects: significantly improving VLMs' modal-mismatch reasoning for SVAD and effectively addressing the challenge of acquiring audio descriptions during VT2A inference.
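The Chain-of-Thought fine-tuning strategy above prompts the VLM to reason stepwise from visual evidence toward an audio description. A minimal sketch of what such a CoT prompt might look like, assuming per-frame captions as input (the template and function name are illustrative, not taken from the paper):

```python
# Hypothetical CoT prompt builder for the SVAD task: given captions of
# sampled video frames, ask the model to (1) spot sound sources,
# (2) infer their sounds, and (3) merge them into one audio description
# that a downstream VT2A model could consume.

def build_svad_cot_prompt(frame_captions):
    """Assemble a Chain-of-Thought-style SVAD prompt from frame captions."""
    frames = "\n".join(
        f"Frame {i + 1}: {caption}" for i, caption in enumerate(frame_captions)
    )
    return (
        "You are watching a silent video summarized by these frames:\n"
        f"{frames}\n"
        "Step 1: List the visible sound sources (objects and actions).\n"
        "Step 2: Infer the sound each source would produce.\n"
        "Step 3: Combine them into one concise audio description."
    )

prompt = build_svad_cot_prompt([
    "A dog runs across wet grass",
    "The dog barks at a passing car",
])
print(prompt)
```

The resulting audio description would then stand in for the ground-truth caption that VT2A methods normally require at inference time.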

Page Count
5 pages

Category
Computer Science:
Multimedia