Listening without Looking: Modality Bias in Audio-Visual Captioning
By: Yuchi Ishikawa, Toranosuke Manabe, Tatsuya Komatsu, and more
Potential Business Impact:
Helps computers describe videos using sound and sight.
Audio-visual captioning aims to generate holistic scene descriptions by jointly modeling sound and vision. While recent methods have improved performance through sophisticated modality fusion, it remains unclear to what extent the two modalities are complementary in current audio-visual captioning models and how robust these models are when one modality is degraded. We address these questions by conducting systematic modality robustness tests on LAVCap, a state-of-the-art audio-visual captioning model, in which we selectively suppress or corrupt the audio or visual streams to quantify sensitivity and complementarity. The analysis reveals a pronounced bias toward the audio stream in LAVCap. To evaluate how balanced audio-visual captioning models are in their use of both modalities, we augment AudioCaps with textual annotations that jointly describe the audio and visual streams, yielding the AudioVisualCaps dataset. In our experiments, we report LAVCap baseline results on AudioVisualCaps and also evaluate the model under the same modality robustness tests; the results indicate that LAVCap trained on AudioVisualCaps exhibits less modality bias than when trained on AudioCaps.
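The abstract does not spell out how the suppression and corruption are implemented, but the protocol can be sketched as follows: before fusion, one modality's features are either zeroed out (suppressed) or perturbed with noise (corrupted), and the captioning metric is compared against the clean baseline. The sketch below is an illustrative assumption, not the paper's code; the feature shapes, the Gaussian-noise corruption, and the condition names are all hypothetical.

```python
# Minimal sketch of a modality robustness test (assumed setup, not LAVCap's code):
# degrade one stream at a time and compare captioning quality to the clean run.
import torch


def degrade(features: torch.Tensor, mode: str, noise_std: float = 1.0) -> torch.Tensor:
    """Return a degraded copy of one modality's features.

    mode="suppress" replaces the stream with zeros (modality removed);
    mode="noise" adds Gaussian noise (modality corrupted);
    mode="clean" leaves the stream untouched.
    """
    if mode == "suppress":
        return torch.zeros_like(features)
    if mode == "noise":
        return features + noise_std * torch.randn_like(features)
    return features


def robustness_conditions(audio: torch.Tensor, visual: torch.Tensor):
    """Yield (condition_name, audio_feats, visual_feats) for each test condition."""
    conditions = [
        ("clean",           "clean",    "clean"),
        ("audio_suppress",  "suppress", "clean"),
        ("audio_noise",     "noise",    "clean"),
        ("visual_suppress", "clean",    "suppress"),
        ("visual_noise",    "clean",    "noise"),
    ]
    for name, a_mode, v_mode in conditions:
        yield name, degrade(audio, a_mode), degrade(visual, v_mode)


if __name__ == "__main__":
    # Dummy feature tensors standing in for encoder outputs (shapes are arbitrary).
    audio_feats = torch.randn(1, 64, 512)
    visual_feats = torch.randn(1, 32, 512)
    for name, a, v in robustness_conditions(audio_feats, visual_feats):
        # In a real evaluation, the degraded pair would be fed to the captioner
        # and a caption metric (e.g., CIDEr or SPIDEr) recorded per condition.
        print(name, a.abs().mean().item(), v.abs().mean().item())
```

Under this kind of protocol, a large metric drop when the audio stream is degraded, alongside little change when the visual stream is degraded, would be evidence of the audio bias the paper reports for LAVCap.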
Similar Papers
Aligned Better, Listen Better for Audio-Visual Large Language Models
Computer Vision and Pattern Recognition
Helps computers understand videos by listening.
Can Sound Replace Vision in LLaVA With Token Substitution?
Multimedia
Makes computers understand sounds and pictures better.
Reading to Listen at the Cocktail Party: Multi-Modal Speech Separation
Audio and Speech Processing
Cleans up noisy talking using sight and sound.