A Survey on Video Temporal Grounding with Multimodal Large Language Model
By: Jianlong Wu, Wei Liu, Ye Liu, and more
Potential Business Impact:
Helps computers find specific moments in videos.
Recent advances in video temporal grounding (VTG) have significantly enhanced fine-grained video understanding, driven primarily by multimodal large language models (MLLMs). With superior multimodal comprehension and reasoning abilities, VTG approaches based on MLLMs (VTG-MLLMs) are gradually surpassing traditional fine-tuned methods. They not only achieve competitive performance but also generalize well across zero-shot, multi-task, and multi-domain settings. Despite extensive surveys on general video-language understanding, comprehensive reviews specifically addressing VTG-MLLMs remain scarce. To fill this gap, this survey systematically examines current research on VTG-MLLMs through a three-dimensional taxonomy: 1) the functional roles of MLLMs, highlighting their architectural significance; 2) training paradigms, analyzing strategies for temporal reasoning and task adaptation; and 3) video feature processing techniques, which determine spatiotemporal representation effectiveness. We further discuss benchmark datasets and evaluation protocols, and summarize empirical findings. Finally, we identify existing limitations and propose promising research directions. For additional resources and details, readers are encouraged to visit our repository at https://github.com/ki-lw/Awesome-MLLMs-for-Video-Temporal-Grounding.
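To make the task concrete, the sketch below illustrates in miniature one common VTG-MLLM recipe: timestamps are interleaved with visual context in a text prompt, the model is asked for the span of the queried event, and the span is parsed back from its textual answer. This is a minimal, framework-agnostic sketch, not any specific method from the survey; the `mllm_generate` callable and the caption-based frame context are illustrative assumptions standing in for a real MLLM and its visual tokens.

```python
import re
from typing import Callable, List, Tuple


def ground_query_in_video(
    frame_timestamps: List[float],        # seconds of uniformly sampled frames
    frame_captions: List[str],            # per-frame descriptions (stand-in for visual tokens)
    query: str,                           # natural-language event description
    mllm_generate: Callable[[str], str],  # hypothetical text-in/text-out MLLM call
) -> Tuple[float, float]:
    """Return an estimated (start, end) span in seconds for the queried event."""
    # 1) Build a timestamp-aware prompt so the model can reason about "when".
    context = "\n".join(
        f"[{t:.1f}s] {cap}" for t, cap in zip(frame_timestamps, frame_captions)
    )
    prompt = (
        "Video frames with timestamps:\n"
        f"{context}\n\n"
        f"Question: During which time span does this happen: \"{query}\"?\n"
        "Answer strictly in the form: start=<seconds>, end=<seconds>"
    )

    # 2) Query the (hypothetical) MLLM.
    answer = mllm_generate(prompt)

    # 3) Parse the textual span back into numbers; fall back to the full video.
    match = re.search(r"start\s*=\s*([\d.]+).*?end\s*=\s*([\d.]+)", answer, re.DOTALL)
    if not match:
        return frame_timestamps[0], frame_timestamps[-1]
    start, end = float(match.group(1)), float(match.group(2))
    return min(start, end), max(start, end)
```

The text-in/text-out formulation shown here is only one of the design choices the survey's taxonomy covers; other approaches attach dedicated temporal heads or special timestamp tokens instead of parsing free-form answers.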
Similar Papers
Enrich and Detect: Video Temporal Grounding with Multimodal LLMs
CV and Pattern Recognition
Finds exact moments in videos from descriptions.
ExpVG: Investigating the Design Space of Visual Grounding in Multimodal Large Language Model
CV and Pattern Recognition
Helps computers understand what pictures show.
Unleashing the Potential of Multimodal LLMs for Zero-Shot Spatio-Temporal Video Grounding
CV and Pattern Recognition
Finds objects in videos using text descriptions.