MF2Summ: Multimodal Fusion for Video Summarization with Temporal Alignment
By: Shuo Wang, Jihao Zhang
Potential Business Impact:
Makes video summaries better by using sound and pictures.
The rapid proliferation of online video content necessitates effective video summarization techniques. Traditional methods, often relying on a single modality (typically visual), struggle to capture the full semantic richness of videos. This paper introduces MF2Summ, a novel video summarization model based on multimodal content understanding, integrating both visual and auditory information. MF2Summ employs a five-stage process: feature extraction, cross-modal attention interaction, feature fusion, segment prediction, and key shot selection. Visual features are extracted using a pre-trained GoogLeNet model, while auditory features are derived using SoundNet. The core of our fusion mechanism involves a cross-modal Transformer and an alignment-guided self-attention Transformer, designed to effectively model inter-modal dependencies and temporal correspondences. Segment importance, location, and center-ness are predicted, followed by key shot selection using Non-Maximum Suppression (NMS) and the Kernel Temporal Segmentation (KTS) algorithm. Experimental results on the SumMe and TVSum datasets demonstrate that MF2Summ achieves competitive performance, notably improving F1-scores by 1.9% and 0.6% respectively over the DSNet model, and performing favorably against other state-of-the-art methods.
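The abstract describes a cross-modal attention stage over visual and audio features, followed by per-segment importance, location, and center-ness prediction. The sketch below is a minimal, hypothetical PyTorch illustration of that kind of fusion and prediction stage; the module names, the 1024-dimensional GoogLeNet/SoundNet feature sizes, the additive fusion, and the head design are assumptions, not the authors' implementation, and the NMS/KTS post-processing is omitted.

```python
# Illustrative sketch (not the authors' code): cross-modal attention fusion of
# visual and audio frame features, followed by per-frame importance,
# center-ness, and segment-boundary heads. Dimensions are assumptions.
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    def __init__(self, vis_dim=1024, aud_dim=1024, d_model=256, n_heads=4):
        super().__init__()
        self.vis_proj = nn.Linear(vis_dim, d_model)
        self.aud_proj = nn.Linear(aud_dim, d_model)
        # Cross-modal attention: visual tokens query audio tokens and vice versa.
        self.vis_to_aud = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.aud_to_vis = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        # Self-attention over the fused sequence to model temporal correspondence.
        enc_layer = nn.TransformerEncoderLayer(
            d_model, n_heads, dim_feedforward=512, batch_first=True)
        self.temporal_encoder = nn.TransformerEncoder(enc_layer, num_layers=2)
        # Prediction heads: importance score, center-ness, and boundary offsets.
        self.importance_head = nn.Linear(d_model, 1)
        self.centerness_head = nn.Linear(d_model, 1)
        self.boundary_head = nn.Linear(d_model, 2)  # left/right offsets

    def forward(self, vis_feats, aud_feats):
        # vis_feats: (B, T, vis_dim) frame-level visual features (e.g. GoogLeNet)
        # aud_feats: (B, T, aud_dim) audio features aligned to the same T steps
        v = self.vis_proj(vis_feats)
        a = self.aud_proj(aud_feats)
        v_att, _ = self.vis_to_aud(query=v, key=a, value=a)  # audio-informed visual
        a_att, _ = self.aud_to_vis(query=a, key=v, value=v)  # visual-informed audio
        fused = self.temporal_encoder(v_att + a_att)          # simple additive fusion
        importance = self.importance_head(fused).squeeze(-1)  # (B, T)
        centerness = torch.sigmoid(self.centerness_head(fused)).squeeze(-1)
        boundaries = self.boundary_head(fused)                 # (B, T, 2)
        return importance, centerness, boundaries

# Toy usage with random features for a 120-frame clip.
model = CrossModalFusion()
vis = torch.randn(1, 120, 1024)
aud = torch.randn(1, 120, 1024)
imp, ctr, box = model(vis, aud)
print(imp.shape, ctr.shape, box.shape)
```

In a full pipeline of this shape, the predicted importance and center-ness scores would then feed NMS to suppress overlapping segment proposals, with KTS supplying shot boundaries for the final key-shot selection.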
Similar Papers
CFSum: A Transformer-Based Multi-Modal Video Summarization Framework With Coarse-Fine Fusion
CV and Pattern Recognition
Makes video summaries using sound, text, and pictures.
Integrating Video and Text: A Balanced Approach to Multimodal Summary Generation and Evaluation
Computation and Language
Creates better TV show summaries from video.
SD-MVSum: Script-Driven Multimodal Video Summarization Method and Datasets
CV and Pattern Recognition
Makes video summaries match your words better.