TA-Prompting: Enhancing Video Large Language Models for Dense Video Captioning via Temporal Anchors
By: Wei-Yuan Cheng, Kai-Po Chang, Chi-Pin Huang and more
Potential Business Impact:
Helps videos tell stories with exact moments.
Dense video captioning aims to interpret and describe all temporally localized events throughout an input video. Recent state-of-the-art methods leverage large language models (LLMs) to produce detailed moment descriptions for video data. However, existing VideoLLMs still struggle to identify precise event boundaries in untrimmed videos, so the generated captions are not properly grounded. In this paper, we propose TA-Prompting, which enhances VideoLLMs via Temporal Anchors that learn to precisely localize events and prompt the VideoLLMs to perform temporally-aware video event understanding. During inference, to properly determine the output caption sequence from an arbitrary number of events present within a video, we introduce an event-coherent sampling strategy that selects event captions with sufficient coherence across temporal events and cross-modal similarity with the given video. Through extensive experiments on benchmark datasets, we show that TA-Prompting performs favorably against state-of-the-art VideoLLMs, yielding superior performance on dense video captioning and on temporal understanding tasks including moment retrieval and temporal QA.
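The abstract does not spell out how event-coherent sampling is implemented; below is a minimal Python sketch of one plausible reading, in which candidate event captions are scored by a weighted mix of caption-video cross-modal similarity and text coherence with the previously kept event, then kept greedily in temporal order. The function name, the weighting parameter `alpha`, the `threshold`, and the greedy formulation are all illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def event_coherent_sampling(caption_embs, video_emb, spans,
                            alpha=0.5, threshold=0.3):
    """Greedy event-coherent sampling (illustrative sketch, not the paper's code).

    caption_embs : (N, D) L2-normalized embeddings of candidate event captions
    video_emb    : (D,)   L2-normalized embedding of the input video
    spans        : list of (start_sec, end_sec) for each candidate event
    alpha        : assumed trade-off between cross-modal similarity and coherence
    threshold    : assumed minimum combined score for a caption to be kept
    """
    # Cross-modal similarity of each candidate caption with the whole video.
    video_sim = caption_embs @ video_emb                     # shape (N,)

    # Visit candidates in temporal order of their predicted spans.
    order = sorted(range(len(spans)), key=lambda i: spans[i][0])

    selected, prev = [], None
    for i in order:
        # Coherence with the most recently kept caption; the first event
        # has no predecessor, so fall back to its video similarity.
        coherence = caption_embs[i] @ caption_embs[prev] if prev is not None else video_sim[i]
        score = alpha * video_sim[i] + (1 - alpha) * coherence
        if score >= threshold:
            selected.append(i)
            prev = i
    return selected  # indices of kept captions, in temporal order
```

Under this reading, raising `alpha` favors captions that match the video globally, while lowering it favors a narratively consistent caption sequence across events.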
Similar Papers
GA2-CLIP: Generic Attribute Anchor for Efficient Prompt Tuning in Video-Language Models
CV and Pattern Recognition
Helps AI remember old lessons when learning new ones.
Explicit Temporal-Semantic Modeling for Dense Video Captioning via Context-Aware Cross-Modal Interaction
CV and Pattern Recognition
Helps computers describe what happens in videos.
Is Temporal Prompting All We Need For Limited Labeled Action Recognition?
CV and Pattern Recognition
Teaches computers to understand videos without lots of labels.