SD-MVSum: Script-Driven Multimodal Video Summarization Method and Datasets

Published: October 7, 2025 | arXiv ID: 2510.05652v1

By: Manolis Mylonas, Charalampia Zerva, Evlampios Apostolidis and more

Potential Business Impact:

Produces video summaries that better match a user-provided script.

Business Areas:
Semantic Web, Internet Services

In this work, we extend a recent method for script-driven video summarization, which originally considered only the visual content of the video, to also take into account the relevance of the user-provided script to the video's spoken content. In the proposed method, SD-MVSum, the dependence between each considered pair of data modalities, i.e., script-video and script-transcript, is modeled using a new weighted cross-modal attention mechanism. This mechanism explicitly exploits the semantic similarity between the paired modalities in order to promote the parts of the full-length video with the highest relevance to the user-provided script. Furthermore, we extend two large-scale video summarization datasets (S-VideoXum, MrHiSum) to make them suitable for training and evaluating script-driven multimodal video summarization methods. Experimental comparisons document the competitiveness of our SD-MVSum method against other SOTA approaches for script-driven and generic video summarization. Our new method and extended datasets are available at: https://github.com/IDT-ITI/SD-MVSum.
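To illustrate the general idea of attention weighted by semantic similarity, the sketch below shows a minimal NumPy version of cross-modal attention where script embeddings attend over video-fragment embeddings and the attention logits are re-scaled by pairwise cosine similarity. This is an assumption-laden illustration of the concept, not the authors' implementation: the embedding dimensions, the `1 + sim` scaling, and the function names are all hypothetical choices made here for clarity.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cosine_sim(a, b):
    # pairwise cosine similarity between rows of a and rows of b
    a_n = a / np.linalg.norm(a, axis=-1, keepdims=True)
    b_n = b / np.linalg.norm(b, axis=-1, keepdims=True)
    return a_n @ b_n.T

def weighted_cross_modal_attention(script, video):
    """Cross-attention from script (queries) to video (keys/values),
    with logits re-weighted by semantic similarity so that video
    fragments most relevant to the script receive more attention.
    (Illustrative sketch only; not the SD-MVSum implementation.)"""
    d = script.shape[-1]
    logits = (script @ video.T) / np.sqrt(d)   # scaled dot-product scores
    sim = cosine_sim(script, video)            # semantic similarity weighting
    attn = softmax(logits * (1.0 + sim))       # promote semantically similar pairs
    return attn @ video, attn

rng = np.random.default_rng(0)
script = rng.standard_normal((4, 64))   # 4 script-sentence embeddings
video = rng.standard_normal((20, 64))   # 20 video-fragment embeddings
out, attn = weighted_cross_modal_attention(script, video)
```

In SD-MVSum, an analogous weighting would be applied to each modality pair (script-video and script-transcript) so that fragments semantically close to the script are promoted in the summary.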

Repos / Data Links
https://github.com/IDT-ITI/SD-MVSum

Page Count
10 pages

Category
Computer Science:
CV and Pattern Recognition