ViSIL: Unified Evaluation of Information Loss in Multimodal Video Captioning
By: Po-han Li, Shenghui Chen, Ufuk Topcu, and more
Multimodal video captioning condenses dense footage into a structured summary of keyframes and natural language. By creating a cohesive multimodal summary, this approach grounds generative AI in rich semantic evidence and serves as a lightweight proxy for efficient retrieval. However, traditional metrics such as BLEU or ROUGE cannot quantify information coverage across disparate modalities, for example when comparing a paragraph of text to a sequence of keyframes. To address this, we propose the Video Summary Information Loss (ViSIL) score, an information-theoretic framework that quantifies the video information not captured by a summary via vision-language model (VLM) inference. Because it measures information loss directly, ViSIL provides a unified metric that enables direct comparison across multimodal summary formats despite their structural differences. Our results show that ViSIL scores correlate significantly with both human and VLM performance on Video Question Answering (VQA) tasks. ViSIL also enables summary selection that optimizes the trade-off between information loss and processing speed, establishing a Pareto-optimal frontier that outperforms text summaries by 7% in VQA accuracy without increasing processing load.
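The abstract does not spell out how the VLM-based score is computed, so the Python sketch below is only a hedged illustration of the general idea: estimate how much information a summary loses by checking which probe questions a VLM can answer from the full video but no longer from the summary. The function `visil_style_loss` and its `answer_from_video` / `answer_from_summary` callables are hypothetical names introduced here for illustration, not the paper's API or formula.

```python
from typing import Callable, Sequence


def visil_style_loss(
    probes: Sequence[tuple[str, str]],            # (question, reference answer) pairs
    answer_from_video: Callable[[str], str],      # VLM answering from the full video
    answer_from_summary: Callable[[str], str],    # VLM answering from the multimodal summary
) -> float:
    """Illustrative information-loss proxy (NOT the paper's ViSIL formula):
    the fraction of probe questions answerable from the full video that are
    no longer answerable from the summary."""
    recoverable = 0  # questions the VLM answers correctly from the full video
    lost = 0         # of those, questions it misses when given only the summary
    for question, reference in probes:
        ref = reference.strip().lower()
        if answer_from_video(question).strip().lower() == ref:
            recoverable += 1
            if answer_from_summary(question).strip().lower() != ref:
                lost += 1
    return lost / recoverable if recoverable else 0.0


# Toy usage with stub "VLMs" that just look answers up in dictionaries.
video_qa = {"Who enters first?": "the chef", "What color is the car?": "red"}
summary_qa = {"Who enters first?": "the chef"}  # the summary dropped the car
probes = list(video_qa.items())
loss = visil_style_loss(
    probes,
    answer_from_video=lambda q: video_qa.get(q, ""),
    answer_from_summary=lambda q: summary_qa.get(q, ""),
)
print(loss)  # 0.5: half of the recoverable facts are lost by the summary
```

A lower score under this toy proxy means the summary preserves more of the video's answerable content, which mirrors (under the stated assumptions) how an information-loss metric could be used to rank candidate summaries.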
Similar Papers
VIBE: Annotation-Free Video-to-Text Information Bottleneck Evaluation for TL;DR
CV and Pattern Recognition
Helps people quickly find important video details.
VSI: Visual Subtitle Integration for Keyframe Selection to enhance Long Video Understanding
CV and Pattern Recognition
Finds important video moments using words and subtitles.
Enhancing Multimodal Recommendations with Vision-Language Models and Information-Aware Fusion
Information Retrieval
Improves online shopping suggestions using pictures and words.