Minimal Clips, Maximum Salience: Long Video Summarization via Key Moment Extraction
By: Galann Pennec, Zhengyuan Liu, Nicholas Asher, and more
Potential Business Impact:
Finds the most important parts of movies to build better summaries.
Vision-Language Models (VLMs) can process increasingly long videos, yet important visual information is easily lost across these long contexts and missed by the models. It is also important to design tools that enable cost-effective analysis of lengthy video content. In this paper, we propose a clip selection method that targets the key video moments to include in a multimodal summary. We divide the video into short clips and generate a compact visual description of each using a lightweight video captioning model. These descriptions are then passed to a large language model (LLM), which selects the K clips containing the most relevant visual information for a multimodal summary. We evaluate our approach on reference clips for the task, automatically derived from full human-annotated screenplays and summaries in the MovieSum dataset. We further show that these reference clips (less than 6% of the movie) are sufficient to build a complete multimodal summary of the movies in MovieSum. With our clip selection method, we achieve summarization performance close to that of the reference clips while capturing substantially more relevant video information than random clip selection. Importantly, we keep computational cost low by relying on a lightweight captioning model.
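As a rough illustration of the pipeline the abstract describes (segment the video into short clips, caption each with a lightweight model, then ask an LLM to pick the K most salient clips), here is a minimal Python sketch. The function names (split_into_clips, select_key_clips, caption_clip, ask_llm) and the prompt wording are hypothetical placeholders, not the authors' implementation; the captioner and LLM are passed in as callables so no particular model API is assumed.

```python
from typing import Callable, List

def split_into_clips(num_frames: int, clip_len: int) -> List[range]:
    """Partition frame indices [0, num_frames) into consecutive short clips."""
    return [range(start, min(start + clip_len, num_frames))
            for start in range(0, num_frames, clip_len)]

def select_key_clips(
    clips: List[range],
    caption_clip: Callable[[range], str],  # lightweight video captioner (stub)
    ask_llm: Callable[[str], List[int]],   # LLM that returns clip indices (stub)
    k: int,
) -> List[range]:
    """Caption every clip, then let an LLM choose the K most salient ones."""
    # Step 1: compact visual description per clip.
    captions = [caption_clip(clip) for clip in clips]
    # Step 2: hand all captions to the LLM and ask for the top-K indices.
    prompt = (
        f"Here are captions for {len(clips)} consecutive movie clips:\n"
        + "\n".join(f"[{i}] {c}" for i, c in enumerate(captions))
        + f"\nReturn the indices of the {k} clips whose visual content "
          "matters most for a summary of the movie."
    )
    chosen = ask_llm(prompt)[:k]
    return [clips[i] for i in chosen]
```

In this design, only the K selected clips would be forwarded to the downstream multimodal summarizer, which is what keeps the approach cheap: the expensive model never has to read the full video, only the captions and the chosen moments.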
Similar Papers
Video Summarization with Large Language Models
CV and Pattern Recognition
Makes video summaries understand stories better.
From Frames to Clips: Efficient Key Clip Selection for Long-Form Video Understanding
CV and Pattern Recognition
Helps computers understand long videos better.
Summarization of Multimodal Presentations with Vision-Language Models: Study of the Effect of Modalities and Structure
CV and Pattern Recognition
Helps computers summarize videos and text together.