TIME: Temporal-Sensitive Multi-Dimensional Instruction Tuning and Robust Benchmarking for Video-LLMs
By: Yunxiao Wang, Meng Liu, Wenqi Liu, and more
Potential Business Impact:
Helps computers understand video time better.
Video large language models have achieved remarkable performance in tasks such as video question answering; however, their temporal understanding remains suboptimal. To address this limitation, we curate a dedicated instruction fine-tuning dataset that focuses on enhancing temporal comprehension across five key dimensions. To reduce reliance on costly temporal annotations, we introduce a multi-task prompt fine-tuning approach that seamlessly integrates temporal-sensitive tasks into existing instruction datasets without requiring additional annotations. Furthermore, we develop a novel benchmark for temporal-sensitive video understanding that not only fills the gaps in dimension coverage left by existing benchmarks but also rigorously filters out potential shortcuts, ensuring a more accurate evaluation. Extensive experimental results demonstrate that our approach significantly enhances the temporal understanding of video-LLMs while avoiding reliance on shortcuts.
Similar Papers
VideoExpert: Augmented LLM for Temporal-Sensitive Video Understanding
CV and Pattern Recognition
Helps computers understand when things happen in videos.
VideoLLM Benchmarks and Evaluation: A Survey
CV and Pattern Recognition
Helps computers understand videos better.
A Study into Investigating Temporal Robustness of LLMs
Computation and Language
Helps computers understand time better for answers.