Harnessing Object Grounding for Time-Sensitive Video Understanding
By: Tz-Ying Wu, Sharath Nittur Sridhar, Subarna Tripathi
Potential Business Impact:
Helps AI understand when things happen in videos by spotting the objects in them.
We propose to improve the time-sensitive video understanding (TSV) capability of video large language models (Video-LLMs) with grounded objects (GO). We hypothesize that TSV tasks can benefit from GO within frames, which is supported by our preliminary experiments on LITA, a state-of-the-art Video-LLM for reasoning temporal localization. While augmenting prompts with textual descriptions of these object annotations improves the performance of LITA, it also introduces extra token length and susceptibility to noise in the object-level information. To address this, we propose GO-Tokenizer, a lightweight add-on module for Video-LLMs that leverages off-the-shelf object detectors to encode compact object information on the fly. Experimental results demonstrate that pretraining with GO-Tokenizer outperforms the vanilla Video-LLM and its counterpart that uses textual descriptions of objects in the prompt. The gains generalize across models, datasets, and video understanding tasks such as reasoning temporal localization and dense captioning.
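To make the idea concrete, here is a minimal sketch (not the authors' implementation) of what a GO-Tokenizer-style add-on could look like: an off-the-shelf detector produces per-frame boxes and class labels, and a small module encodes each detection into one compact token that could be appended to the frame's visual tokens, sidestepping a long textual description in the prompt. The class name `GOTokenizer`, its dimensions, and the choice of detector are illustrative assumptions.

```python
# A hypothetical GO-Tokenizer-style module: detector outputs -> compact object tokens.
import torch
import torch.nn as nn
from torchvision.models.detection import fasterrcnn_resnet50_fpn


class GOTokenizer(nn.Module):
    """Encodes detected boxes and class labels into compact object tokens."""

    def __init__(self, num_classes: int = 91, hidden_dim: int = 256,
                 max_objects: int = 8):
        super().__init__()
        self.max_objects = max_objects
        self.class_embed = nn.Embedding(num_classes, hidden_dim)
        self.box_embed = nn.Linear(4, hidden_dim)  # (x1, y1, x2, y2), normalized
        self.fuse = nn.Linear(2 * hidden_dim, hidden_dim)

    def forward(self, boxes: torch.Tensor, labels: torch.Tensor,
                image_size: tuple) -> torch.Tensor:
        # torchvision sorts detections by score, so slicing keeps the top-k.
        boxes, labels = boxes[: self.max_objects], labels[: self.max_objects]
        h, w = image_size
        norm = boxes / torch.tensor([w, h, w, h], dtype=boxes.dtype)
        tok = torch.cat([self.box_embed(norm), self.class_embed(labels)], dim=-1)
        return self.fuse(tok)  # (num_objects, hidden_dim): one token per object


# Usage sketch: run an off-the-shelf detector on a frame, then tokenize.
detector = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()
tokenizer = GOTokenizer()
frame = torch.rand(3, 480, 640)  # dummy RGB frame in [0, 1]
with torch.no_grad():
    det = detector([frame])[0]
object_tokens = tokenizer(det["boxes"], det["labels"], image_size=(480, 640))
# `object_tokens` would be concatenated with the frame's visual tokens before
# the LLM, adding only a handful of tokens instead of a long text description.
```

The appeal of such a design is token economy: a frame with eight detections costs eight extra tokens rather than dozens of text tokens for an equivalent prompt description, and the learned embeddings can absorb some detector noise during pretraining.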
Similar Papers
Enrich and Detect: Video Temporal Grounding with Multimodal LLMs
CV and Pattern Recognition
Finds exact moments in videos from descriptions.
Thinking With Bounding Boxes: Enhancing Spatio-Temporal Video Grounding via Reinforcement Fine-Tuning
CV and Pattern Recognition
Helps computers find objects in videos using words.
ToG-Bench: Task-Oriented Spatio-Temporal Grounding in Egocentric Videos
CV and Pattern Recognition
Helps AI find the objects that matter for a task in first-person videos.