A Matter of Time: Revealing the Structure of Time in Vision-Language Models
By: Nidham Tekaya, Manuela Waldner, Matthias Zeppelzauer
Potential Business Impact:
Lets computers understand when pictures were taken.
Large-scale vision-language models (VLMs) such as CLIP have gained popularity for their generalizable and expressive multimodal representations. By leveraging large-scale training data with diverse textual metadata, VLMs acquire open-vocabulary capabilities and can solve tasks beyond their training scope. This paper investigates the temporal awareness of VLMs, assessing their ability to position visual content in time. We introduce TIME10k, a benchmark dataset of over 10,000 images with temporal ground truth, and evaluate the time-awareness of 37 VLMs using a novel methodology. Our investigation reveals that temporal information is structured along a low-dimensional, non-linear manifold in the VLM embedding space. Based on this insight, we propose methods to derive an explicit "timeline" representation from the embedding space. These representations model time and its chronological progression and thereby facilitate temporal reasoning tasks. Our timeline approaches achieve accuracy that is competitive with, and in some cases superior to, a prompt-based baseline, while being computationally efficient. All code and data are available at https://tekayanidham.github.io/timeline-page/.
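To make the prompt-based baseline mentioned in the abstract concrete, the sketch below estimates when an image was taken by scoring it against a set of year-conditioned text prompts with CLIP and picking the best-matching year. This is a minimal illustration assuming the Hugging Face transformers CLIP API; the prompt template, year grid, and image path are assumptions for the example and are not the authors' exact baseline or their timeline method.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Load a standard CLIP checkpoint (model choice is an assumption, not the paper's).
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
model.eval()

# Candidate years and a simple prompt template (hypothetical, for illustration only).
years = list(range(1900, 2025, 5))
prompts = [f"a photo taken in the year {y}" for y in years]

# "example.jpg" is a placeholder path for the query image.
image = Image.open("example.jpg")
inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)

with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image holds image-text similarity scores; softmax over the year prompts.
probs = outputs.logits_per_image.softmax(dim=-1)
estimated_year = years[probs.argmax().item()]
print(f"Estimated year: {estimated_year}")
```

Note that such a baseline requires one text embedding per candidate year at inference time; the paper's timeline approaches instead derive an explicit time representation from the embedding space, which is how they remain computationally efficient while reaching competitive accuracy.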
Similar Papers
TimeLens: Rethinking Video Temporal Grounding with Multimodal LLMs
CV and Pattern Recognition
Helps computers find specific moments in videos.
STER-VLM: Spatio-Temporal With Enhanced Reference Vision-Language Models
CV and Pattern Recognition
Helps self-driving cars understand traffic better.
VLM4D: Towards Spatiotemporal Awareness in Vision Language Models
CV and Pattern Recognition
Tests AI's grasp of video movements and fixes gaps.