Explicit Temporal-Semantic Modeling for Dense Video Captioning via Context-Aware Cross-Modal Interaction
By: Mingda Jia, Weiliang Meng, Zenghuang Fu, and more
Potential Business Impact:
Helps computers describe what happens in videos.
Dense video captioning jointly localizes and captions salient events in untrimmed videos. Recent methods primarily focus on leveraging additional prior knowledge and advanced multi-task architectures to achieve competitive performance. However, these pipelines rely on implicit modeling over frame-level or fragmented video features, failing to capture the temporal coherence across event sequences and the comprehensive semantics within visual contexts. To address this, we propose an explicit temporal-semantic modeling framework called Context-Aware Cross-Modal Interaction (CACMI), which leverages both latent temporal characteristics within videos and linguistic semantics from a text corpus. Specifically, our model consists of two core components: Cross-modal Frame Aggregation, which aggregates relevant frames to extract temporally coherent, event-aligned textual features through cross-modal retrieval; and Context-aware Feature Enhancement, which utilizes query-guided attention to integrate visual dynamics with pseudo-event semantics. Extensive experiments on the ActivityNet Captions and YouCook2 datasets demonstrate that CACMI achieves state-of-the-art performance on the dense video captioning task.
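The two components can be pictured as a cross-modal retrieval step followed by query-guided cross-attention. Below is a minimal PyTorch-style sketch of that idea; the feature dimensions, top-k retrieval over a precomputed text-corpus bank, and standard multi-head attention are assumptions for illustration, not the authors' implementation.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class CrossModalFrameAggregation(nn.Module):
        # Retrieve the corpus text features most similar to each frame and pool
        # them, yielding event-aligned pseudo-text features per frame.
        # (Illustrative sketch; names follow the abstract, details are assumed.)
        def __init__(self, dim, top_k=5):
            super().__init__()
            self.top_k = top_k
            self.proj = nn.Linear(dim, dim)

        def forward(self, frame_feats, corpus_feats):
            # frame_feats: (B, T, D) visual features; corpus_feats: (N, D) text bank
            q = F.normalize(self.proj(frame_feats), dim=-1)          # (B, T, D)
            k = F.normalize(corpus_feats, dim=-1)                    # (N, D)
            sim = q @ k.t()                                          # (B, T, N) similarity
            topk = sim.topk(self.top_k, dim=-1)                      # top-k texts per frame
            weights = topk.values.softmax(dim=-1)                    # (B, T, k)
            retrieved = corpus_feats[topk.indices]                   # (B, T, k, D)
            return (weights.unsqueeze(-1) * retrieved).sum(dim=-2)   # (B, T, D) pseudo-event text feats

    class ContextAwareFeatureEnhancement(nn.Module):
        # Query-guided cross-attention that fuses visual dynamics with the
        # retrieved pseudo-event semantics before the captioning/localization heads.
        def __init__(self, dim, num_heads=8):
            super().__init__()
            self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
            self.norm = nn.LayerNorm(dim)

        def forward(self, event_queries, frame_feats, text_feats):
            # event_queries: (B, Q, D); frame_feats, text_feats: (B, T, D)
            context = torch.cat([frame_feats, text_feats], dim=1)    # (B, 2T, D)
            enhanced, _ = self.attn(event_queries, context, context)
            return self.norm(event_queries + enhanced)               # (B, Q, D)

In this reading, the output of the first module would serve as the pseudo-event text stream that the second module attends over alongside the visual features.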
Similar Papers
Exploring The Missing Semantics In Event Modality
CV and Pattern Recognition
Helps cameras see objects even in fast motion.
From Captions to Keyframes: Efficient Video Summarization via Caption- and Context-Aware Frame Scoring
CV and Pattern Recognition
Finds important video parts for understanding.
Dense Motion Captioning
CV and Pattern Recognition
Helps computers understand and describe human movements.