SD-MVSum: Script-Driven Multimodal Video Summarization Method and Datasets
By: Manolis Mylonas, Charalampia Zerva, Evlampios Apostolidis, and more
Potential Business Impact:
Makes video summaries better match your words.
In this work, we extend a recent method for script-driven video summarization, which originally considered only the visual content of the video, to also take into account the relevance of the user-provided script to the video's spoken content. In the proposed method, SD-MVSum, the dependence between each considered pair of data modalities, i.e., script-video and script-transcript, is modeled using a new weighted cross-modal attention mechanism that explicitly exploits the semantic similarity between the paired modalities in order to promote the parts of the full-length video with the highest relevance to the user-provided script. Furthermore, we extend two large-scale datasets for video summarization (S-VideoXum, MrHiSum) to make them suitable for training and evaluating script-driven multimodal video summarization methods. Experimental comparisons document the competitiveness of our SD-MVSum method against other state-of-the-art (SOTA) approaches for script-driven and generic video summarization. Our new method and extended datasets are available at: https://github.com/IDT-ITI/SD-MVSum.
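As a rough illustration of the idea, the sketch below shows one plausible way such a weighted cross-modal attention could be realized in PyTorch: standard scaled dot-product attention between script and video (or transcript) embeddings, with the attention scores modulated by the cosine similarity of the paired modalities. This is a minimal sketch, not the paper's actual implementation; the function name, the (1 + sim) weighting scheme, and all tensor shapes are assumptions made for illustration.

import torch
import torch.nn.functional as F

def weighted_cross_modal_attention(script_emb, modality_emb):
    # script_emb:   (n_script, d) embeddings of the script sentences (queries)
    # modality_emb: (n_items, d)  embeddings of video shots or transcript
    #                             segments (keys/values)
    d = script_emb.size(-1)
    # standard scaled dot-product attention scores
    scores = script_emb @ modality_emb.T / d ** 0.5  # (n_script, n_items)
    # explicit semantic similarity between the paired modalities
    sim = F.cosine_similarity(script_emb.unsqueeze(1),
                              modality_emb.unsqueeze(0), dim=-1)
    # weight the scores by the similarity so that the parts of the video most
    # relevant to the script are promoted (hypothetical weighting scheme)
    attn = (scores * (1.0 + sim)).softmax(dim=-1)
    return attn @ modality_emb  # (n_script, d) script-attended features

# usage sketch: 8 script sentences, 120 video shots, 256-d embeddings
script = torch.randn(8, 256)
video = torch.randn(120, 256)
out = weighted_cross_modal_attention(script, video)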
Similar Papers
SD-VSum: A Method and Dataset for Script-Driven Video Summarization
CV and Pattern Recognition
Makes videos shorter based on your story.
Integrating Video and Text: A Balanced Approach to Multimodal Summary Generation and Evaluation
Computation and Language
Creates better TV show summaries from video.
Prompts to Summaries: Zero-Shot Language-Guided Video Summarization
CV and Pattern Recognition
Makes videos shorter by asking questions.