CFSum: A Transformer-Based Multi-Modal Video Summarization Framework With Coarse-Fine Fusion
By: Yaowei Guo, Jiazheng Xing, Xiaojun Hou, and more
Potential Business Impact:
Makes video summaries using sound, text, and pictures.
Video summarization, which selects the most informative and/or user-relevant parts of original videos to create concise summary videos, has high research value and strong consumer demand in today's era of video proliferation. Multi-modal video summarization that accommodates user input has become a research hotspot. However, current multi-modal video summarization methods suffer from two limitations. First, existing methods inadequately fuse information from different modalities and cannot effectively exploit modality-unique features. Second, most multi-modal methods focus on the video and text modalities and neglect audio, even though audio information can be highly useful for certain types of videos. In this paper, we propose CFSum, a transformer-based multi-modal video summarization framework with coarse-fine fusion. CFSum takes video, text, and audio features as input and incorporates a two-stage transformer-based feature fusion framework to fully utilize modality-unique information. In the first stage, multi-modal features are fused simultaneously to perform initial coarse-grained feature fusion; in the second stage, video and audio features explicitly attend to the text representation, yielding finer-grained information interaction. The CFSum architecture gives equal importance to each modality, ensuring that each modal feature interacts deeply with the others. Extensive comparative experiments against prior methods and ablation studies on various datasets confirm the effectiveness and superiority of CFSum.
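To make the two-stage idea concrete, here is a minimal PyTorch sketch of a coarse-fine fusion block: stage one self-attends over the concatenated video, text, and audio tokens (coarse fusion), and stage two lets the video and audio tokens cross-attend to the text representation (fine fusion) before scoring frames. All names, layer counts, and dimensions (`CoarseFineFusion`, `d_model`, the frame-level score head, audio aligned to video frames) are illustrative assumptions, not the authors' implementation.

```python
# A minimal sketch of coarse-fine multi-modal fusion, assuming pre-extracted
# video, text, and audio features already projected to a shared dimension.
import torch
import torch.nn as nn

class CoarseFineFusion(nn.Module):  # hypothetical module, not the official CFSum code
    def __init__(self, d_model: int = 256, n_heads: int = 4):
        super().__init__()
        # Stage 1: coarse-grained fusion -- self-attention over the
        # concatenated video/text/audio token sequence.
        coarse_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True)
        self.coarse = nn.TransformerEncoder(coarse_layer, num_layers=2)
        # Stage 2: fine-grained fusion -- video and audio tokens each
        # cross-attend to the text representation.
        self.video_to_text = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.audio_to_text = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.score_head = nn.Linear(2 * d_model, 1)  # per-frame importance score

    def forward(self, video, text, audio):
        # video: (B, Tv, d), text: (B, Tt, d), audio: (B, Tv, d);
        # audio is assumed aligned to video frames for simplicity.
        fused = self.coarse(torch.cat([video, text, audio], dim=1))
        Tv, Tt = video.size(1), text.size(1)
        v, t, a = fused[:, :Tv], fused[:, Tv:Tv + Tt], fused[:, Tv + Tt:]
        v_fine, _ = self.video_to_text(v, t, t)  # video queries attend to text
        a_fine, _ = self.audio_to_text(a, t, t)  # audio queries attend to text
        return self.score_head(torch.cat([v_fine, a_fine], dim=-1)).squeeze(-1)

# Usage: scores = CoarseFineFusion()(video_feats, text_feats, audio_feats)
```

The sketch only mirrors the high-level description in the abstract (coarse joint fusion followed by text-guided cross-attention for video and audio); the actual CFSum layer design, losses, and feature extractors are detailed in the paper.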
Similar Papers
MF2Summ: Multimodal Fusion for Video Summarization with Temporal Alignment
CV and Pattern Recognition
Makes video summaries better by using sound and pictures.
Integrating Video and Text: A Balanced Approach to Multimodal Summary Generation and Evaluation
Computation and Language
Creates better TV show summaries from video.
FusionAudio-1.2M: Towards Fine-grained Audio Captioning with Multimodal Contextual Fusion
Sound
Makes computers describe sounds with more detail.