From Captions to Keyframes: Efficient Video Summarization via Caption- and Context-Aware Frame Scoring
By: Shih-Yao Lin, Sibendu Paul, Caren Chen
Potential Business Impact:
Finds the most important parts of a video for efficient understanding.
Efficient video-language understanding requires selecting a small set of frames that retain semantic and contextual information from long videos. We propose KeyScore, a multimodal frame scoring framework that jointly leverages captions and visual context to estimate frame-level importance. By combining semantic similarity, temporal diversity, and contextual drop impact, KeyScore identifies the most informative frames for downstream tasks such as retrieval, captioning, and video-language reasoning. To complement KeyScore, we introduce STACFP (Spatio-Temporal Adaptive Clustering for Frame Proposals), which generates compact and diverse frame candidates for long-form videos. Together, these modules achieve up to 99% frame reduction compared to full-frame inference and substantially outperform standard 8-frame encoders on MSRVTT, MSVD, and DiDeMo. Our results demonstrate that emphasizing multimodal alignment between visual and textual signals enables scalable, efficient, and caption-grounded video understanding -- without explicit video summarization.
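The abstract describes three scoring signals (semantic similarity, temporal diversity, and contextual drop impact) combined into a per-frame importance score. The paper does not publish its exact formula here, so the sketch below is an illustrative reconstruction: cosine similarity between frame and caption embeddings for the semantic term, dissimilarity to adjacent frames for diversity, and the change in mean caption similarity when a frame is removed for drop impact. The function name `keyscore` and the weights `alpha`, `beta`, `gamma` are assumptions, not from the paper.

```python
import numpy as np

def keyscore(frame_embs, caption_emb, alpha=0.5, beta=0.3, gamma=0.2):
    """Illustrative KeyScore-style frame scoring (weights are assumed).

    frame_embs: (T, D) array of per-frame embeddings.
    caption_emb: (D,) caption embedding.
    Returns a (T,) array of importance scores.
    """
    # Normalize so dot products are cosine similarities.
    f = frame_embs / np.linalg.norm(frame_embs, axis=1, keepdims=True)
    c = caption_emb / np.linalg.norm(caption_emb)

    # 1) Semantic similarity: how well each frame matches the caption.
    semantic = f @ c

    # 2) Temporal diversity: 1 minus mean similarity to adjacent frames,
    #    so redundant frames in static shots score low.
    neighbor_sim = np.zeros(len(f))
    for t in range(len(f)):
        nbrs = []
        if t > 0:
            nbrs.append(f[t - 1])
        if t < len(f) - 1:
            nbrs.append(f[t + 1])
        neighbor_sim[t] = np.mean([f[t] @ n for n in nbrs])
    diversity = 1.0 - neighbor_sim

    # 3) Drop impact: how much the mean caption similarity falls
    #    when frame t is excluded from the frame set.
    total = semantic.sum()
    drop_impact = semantic.mean() - (total - semantic) / (len(f) - 1)

    return alpha * semantic + beta * diversity + gamma * drop_impact

def select_top_k(scores, k):
    """Indices of the k highest-scoring frames, in temporal order."""
    return np.sort(np.argsort(scores)[-k:])
```

Selecting the top-k frames by this score, after a proposal stage such as STACFP prunes the candidate pool, is how the abstract's large frame reduction would be realized in practice.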
Similar Papers
Explicit Temporal-Semantic Modeling for Dense Video Captioning via Context-Aware Cross-Modal Interaction
CV and Pattern Recognition
Helps computers describe what happens in videos.
Context-Aware Pseudo-Label Scoring for Zero-Shot Video Summarization
CV and Pattern Recognition
Makes long videos short without watching them.
Video Summarization with Large Language Models
CV and Pattern Recognition
Makes video summaries understand stories better.