Hearing from Silence: Reasoning Audio Descriptions from Silent Videos via Vision-Language Model
By: Yong Ren, Chenxing Li, Le Xu, and more
Potential Business Impact:
Lets computers guess sounds from silent videos.
Humans can intuitively infer sounds from silent videos, but whether multimodal large language models can perform modal-mismatch reasoning without accessing the target modality remains relatively unexplored. Current text-assisted video-to-audio (VT2A) methods excel at video foley tasks but struggle to acquire audio descriptions during inference. We introduce the task of Reasoning Audio Descriptions from Silent Videos (SVAD) to address this challenge and investigate vision-language models' (VLMs) capabilities on this task. To further enhance VLMs' reasoning capacity for the SVAD task, we construct a CoT-AudioCaps dataset and propose a Chain-of-Thought-based supervised fine-tuning strategy. Experiments on SVAD and subsequent VT2A tasks demonstrate our method's effectiveness in two key aspects: significantly improving VLMs' modal-mismatch reasoning for SVAD and effectively addressing the challenge of acquiring audio descriptions during VT2A inference.
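The Chain-of-Thought supervised fine-tuning strategy described above can be illustrated with a minimal sketch of how one training record might be formatted: the target pairs step-by-step reasoning about visible events with the final audio description that the downstream VT2A stage consumes. All field names, prompt wording, and the helper function here are hypothetical, not the paper's actual data schema.

```python
# Hypothetical sketch of a CoT-style SFT record for the SVAD task.
# Field names ("video", "prompt", "target") and prompt wording are
# illustrative assumptions, not the CoT-AudioCaps format itself.

def build_cot_sample(video_id: str, visual_events: list, audio_caption: str) -> dict:
    """Pack a silent-video example into a Chain-of-Thought SFT record."""
    # The reasoning chain links each observed visual event to its likely
    # sound, then the final answer gives the audio description that a
    # text-assisted video-to-audio (VT2A) model would take as input.
    reasoning = " ".join(
        f"The video shows {event}, which typically produces sound."
        for event in visual_events
    )
    return {
        "video": video_id,
        "prompt": "Watch the silent video and reason step by step "
                  "about what you would hear.",
        "target": f"Reasoning: {reasoning} Audio description: {audio_caption}",
    }

sample = build_cot_sample(
    "clip_0001",
    ["a dog barking at a passing car", "rain hitting a window"],
    "A dog barks repeatedly while rain patters against glass.",
)
print(sample["target"])
```

At inference time, only the prompt and video would be supplied; the fine-tuned VLM generates the reasoning and audio description itself, sidestepping the need for ground-truth captions.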
Similar Papers
SightSound-R1: Cross-Modal Reasoning Distillation from Vision to Audio Language Models
Sound
Teaches computers to understand sounds better.
Aligned Better, Listen Better for Audio-Visual Large Language Models
CV and Pattern Recognition
Helps computers understand videos by listening.
video-SALMONN-o1: Reasoning-enhanced Audio-visual Large Language Model
CV and Pattern Recognition
Helps computers understand videos by watching and listening.