SD-VSum: A Method and Dataset for Script-Driven Video Summarization
By: Manolis Mylonas, Evlampios Apostolidis, Vasileios Mezaris
Potential Business Impact:
Makes videos shorter based on a script you write.
In this work, we introduce the task of script-driven video summarization, which aims to produce a summary of a full-length video by selecting the parts that are most relevant to a user-provided script outlining the visual content of the desired summary. Next, we extend a recently introduced large-scale dataset for generic video summarization (VideoXum) by producing natural language descriptions of the different human-annotated summaries that are available per video. In this way, we make the dataset compatible with the introduced task, since the resulting triplets of video, summary, and summary description can be used to train a method that produces different summaries for a given video, each driven by a script describing the desired content. Finally, we develop a new network architecture for script-driven video summarization (SD-VSum) that employs a cross-modal attention mechanism to align and fuse information from the visual and text modalities. Our experimental evaluations demonstrate the superior performance of SD-VSum over state-of-the-art approaches for query-driven and generic (unimodal and multimodal) video summarization from the literature, and document its capacity to produce summaries that are adapted to each user's needs regarding their content.
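To make the described mechanism concrete, below is a minimal sketch of a cross-modal attention block in which video-frame features attend to script-sentence features and frame-level relevance scores are produced for summary selection. It assumes standard scaled dot-product cross-attention (PyTorch's nn.MultiheadAttention); the dimensions, residual fusion, and scoring head are illustrative assumptions, not the exact SD-VSum design.

    # Minimal sketch of cross-modal attention for script-driven summarization.
    # Assumes standard scaled dot-product cross-attention; dimensions, fusion
    # strategy, and scoring head are illustrative, not the paper's exact design.
    import torch
    import torch.nn as nn

    class CrossModalAttention(nn.Module):
        def __init__(self, dim=512, num_heads=8):
            super().__init__()
            # Video frame features (queries) attend to script features (keys/values).
            self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
            self.norm = nn.LayerNorm(dim)
            # Frame-level relevance scores in [0, 1] for summary selection.
            self.score = nn.Sequential(nn.Linear(dim, 1), nn.Sigmoid())

        def forward(self, video_feats, text_feats):
            # video_feats: (batch, n_frames, dim)     visual embeddings
            # text_feats:  (batch, n_sentences, dim)  script embeddings
            fused, _ = self.attn(query=video_feats, key=text_feats, value=text_feats)
            fused = self.norm(video_feats + fused)   # residual fusion
            return self.score(fused).squeeze(-1)     # (batch, n_frames)

    # Usage: score 60 frames of a video against a 3-sentence script.
    model = CrossModalAttention()
    video = torch.randn(1, 60, 512)
    script = torch.randn(1, 3, 512)
    print(model(video, script).shape)  # torch.Size([1, 60])

In such a setup, the highest-scoring frames or shots would be assembled into the script-conditioned summary; the residual connection keeps the original visual information available alongside the script-aligned features.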
Similar Papers
SD-MVSum: Script-Driven Multimodal Video Summarization Method and Datasets
CV and Pattern Recognition
Makes video summaries match your words better.
Prompts to Summaries: Zero-Shot Language-Guided Video Summarization
CV and Pattern Recognition
Makes videos shorter by asking questions.
Integrating Video and Text: A Balanced Approach to Multimodal Summary Generation and Evaluation
Computation and Language
Creates better TV show summaries from video.