An empirical study of the effect of video encoders on Temporal Video Grounding
By: Ignacio M. De la Jara, Cristian Rodriguez-Opazo, Edison Marrese-Taylor, and more
Potential Business Impact:
Helps computers find video clips from descriptions.
Temporal video grounding is a fundamental task in computer vision that aims to localize a natural language query in a long, untrimmed video. It plays a key role in the scientific community, in part due to the large amount of video generated every day. Although there is extensive work on this task, research remains focused on a small selection of video representations, which may lead to architectural overfitting in the long run. To address this issue, we propose an empirical study investigating the impact of different video features on a classical architecture. We extract features for three well-known benchmarks, Charades-STA, ActivityNet-Captions, and YouCookII, using video encoders based on CNNs, temporal reasoning, and transformers. Our results show that simply changing the video encoder produces significant differences in the model's performance, while also revealing clear patterns and errors tied to the use of certain features, ultimately indicating potential feature complementarity.
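The experimental recipe described above, fixing the grounding architecture and swapping only the video encoder, can be made concrete with a short sketch. The code below is a minimal illustration, not the authors' pipeline: it uses torchvision's off-the-shelf 3D-CNN backbones (r3d_18, r2plus1d_18, mc3_18) as stand-ins for the encoder families the paper compares, and the ENCODERS dict and extract_features helper are hypothetical names introduced here for illustration.

```python
# A minimal sketch of encoder-swapping for clip feature extraction.
# Assumptions: torchvision >= 0.13; the three backbones are illustrative
# stand-ins, not necessarily the encoders used in the paper.
import torch
import torch.nn as nn
from torchvision.models.video import r3d_18, r2plus1d_18, mc3_18

ENCODERS = {
    "r3d_18": r3d_18,            # plain 3D CNN
    "r2plus1d_18": r2plus1d_18,  # factorized spatial + temporal convolutions
    "mc3_18": mc3_18,            # mixed 2D/3D convolutions
}

def extract_features(encoder_name: str, clips: torch.Tensor) -> torch.Tensor:
    """Return one pooled feature vector per clip; clips is (B, 3, T, H, W)."""
    # "DEFAULT" downloads pretrained weights on first use.
    model = ENCODERS[encoder_name](weights="DEFAULT")
    model.fc = nn.Identity()  # drop the classification head, keep pooled features
    model.eval()
    with torch.no_grad():
        return model(clips)  # (B, 512) for these backbones

# Example: 4 clips of 16 frames at 112x112, encoded with each backbone.
clips = torch.randn(4, 3, 16, 112, 112)
for name in ENCODERS:
    feats = extract_features(name, clips)
    print(name, feats.shape)
```

In the study's setting, the downstream grounding model stays fixed, so any change in localization accuracy can be attributed to the choice of encoder; the helper above only makes that swap explicit.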
Similar Papers
Enrich and Detect: Video Temporal Grounding with Multimodal LLMs
Computer Vision and Pattern Recognition
Finds exact moments in videos from descriptions.
TAR-TVG: Enhancing VLMs with Timestamp Anchor-Constrained Reasoning for Temporal Video Grounding
Computer Vision and Pattern Recognition
Finds exact moments in videos using words.
Causality Matters: How Temporal Information Emerges in Video Language Models
Computer Vision and Pattern Recognition
Lets computers understand video time without special time codes.