TimeExpert: An Expert-Guided Video LLM for Video Temporal Grounding
By: Zuhao Yang, Yingchen Yu, Yunqing Zhao, and more
Potential Business Impact:
Finds video moments described in words.
Video Temporal Grounding (VTG) aims to precisely identify video event segments in response to textual queries. The outputs of VTG tasks manifest as sequences of events, each defined by precise timestamps, saliency scores, and textual descriptions. Despite recent advances, a fundamental limitation persists in existing Video Large Language Models (Video-LLMs): they process all task tokens through identical, static pathways, failing to recognize that temporal localization, saliency assessment, and textual generation are distinct subtasks requiring specialized processing. To address this, we introduce TimeExpert, a Mixture-of-Experts (MoE)-based Video-LLM that effectively decomposes VTG tasks by dynamically routing task-specific tokens (e.g., timestamps, saliency scores) to specialized experts, while improving computational efficiency. Our design enables precise handling of each subtask, leading to improved event modeling across diverse VTG applications. Extensive experiments demonstrate that TimeExpert consistently achieves state-of-the-art performance on VTG tasks such as Dense Video Captioning, Moment Retrieval, and Video Highlight Detection.
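To make the routing idea in the abstract concrete, here is a minimal sketch of token-type-aware MoE routing. This is not the authors' implementation: the class names (TaskRoutedMoE, Expert), the TIME/SCORE/TEXT type ids, the per-type expert banks with a learned softmax gate, and all dimensions are hypothetical assumptions for illustration only.

```python
# Minimal sketch of token-type-aware MoE routing (assumed design, not from the paper).
import torch
import torch.nn as nn
import torch.nn.functional as F

TIME, SCORE, TEXT = 0, 1, 2  # hypothetical task-token type ids


class Expert(nn.Module):
    """A small feed-forward expert; one bank of these per token type."""

    def __init__(self, d_model: int, d_ff: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


class TaskRoutedMoE(nn.Module):
    """Routes each token to the expert bank for its task type
    (timestamps, saliency scores, or text), then returns a
    softmax-weighted mixture over that bank's experts."""

    def __init__(self, d_model: int = 512, d_ff: int = 1024,
                 experts_per_type: int = 2, num_types: int = 3):
        super().__init__()
        self.experts = nn.ModuleList(
            [nn.ModuleList([Expert(d_model, d_ff) for _ in range(experts_per_type)])
             for _ in range(num_types)]
        )
        self.gate = nn.Linear(d_model, experts_per_type)

    def forward(self, hidden: torch.Tensor, token_type: torch.Tensor) -> torch.Tensor:
        # hidden: (seq_len, d_model); token_type: (seq_len,) with values in {0, 1, 2}
        out = torch.zeros_like(hidden)
        for t, bank in enumerate(self.experts):
            mask = token_type == t
            if not mask.any():
                continue
            x = hidden[mask]
            weights = F.softmax(self.gate(x), dim=-1)           # (n_t, experts_per_type)
            stacked = torch.stack([e(x) for e in bank], dim=1)  # (n_t, experts_per_type, d)
            out[mask] = (weights.unsqueeze(-1) * stacked).sum(dim=1)
        return out


# Usage: 10 tokens with mixed timestamp / saliency / text types.
moe = TaskRoutedMoE()
h = torch.randn(10, 512)
types = torch.tensor([TIME, TIME, SCORE, TEXT, TEXT, TEXT, TIME, SCORE, TEXT, TEXT])
print(moe(h, types).shape)  # torch.Size([10, 512])
```

Routing on token type before gating keeps timestamp, saliency-score, and text tokens on separate parameter paths, which is the kind of task separation the abstract argues is missing from Video-LLMs that push all tokens through one static pathway.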
Similar Papers
Enrich and Detect: Video Temporal Grounding with Multimodal LLMs
CV and Pattern Recognition
Finds exact moments in videos from descriptions.
VideoExpert: Augmented LLM for Temporal-Sensitive Video Understanding
CV and Pattern Recognition
Helps computers understand when things happen in videos.
A Survey on Video Temporal Grounding with Multimodal Large Language Model
CV and Pattern Recognition
Helps computers find specific moments in videos.