Leveraging Auxiliary Information in Text-to-Video Retrieval: A Review
By: Adriano Fragomeni, Dima Damen, Michael Wray
Potential Business Impact:
Finds the right video from text descriptions.
Text-to-Video (T2V) retrieval aims to identify the most relevant item from a gallery of videos given a user's text query. Traditional methods rely solely on aligning the video and text modalities to compute cross-modal similarity and retrieve relevant items. However, recent advances emphasise incorporating auxiliary information extracted from the video and text modalities to improve retrieval performance and bridge the semantic gap between them. Auxiliary information can include visual attributes, such as objects; temporal and spatial context; and textual descriptions, such as speech and rephrased captions. This survey comprehensively reviews 81 research papers on Text-to-Video retrieval that utilise such auxiliary information. It provides a detailed analysis of their methodologies, highlights state-of-the-art results on benchmark datasets, and discusses the available datasets and their auxiliary information. Additionally, it proposes promising directions for future research, focusing on different ways to further enhance retrieval performance using this information.
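For readers unfamiliar with the baseline setup the survey builds on, the sketch below illustrates the standard similarity-based retrieval step: a text query and each gallery video are embedded into a shared space and videos are ranked by cosine similarity. The encoders are omitted and the embeddings are random placeholders, so this is an illustration of the general pipeline rather than any specific method from the surveyed papers.

```python
import numpy as np

# Hypothetical pre-computed embeddings: in practice these would come from
# video and text encoders (e.g. a dual-encoder model). Random placeholders
# are used here so the ranking logic is runnable on its own.
rng = np.random.default_rng(0)
video_embeddings = rng.normal(size=(100, 512))   # gallery of 100 videos
query_embedding = rng.normal(size=(512,))        # one text query

def l2_normalise(x, axis=-1):
    """Scale vectors to unit length so a dot product equals cosine similarity."""
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def rank_videos(query, gallery, top_k=5):
    """Return indices of the top_k gallery videos most similar to the query."""
    query = l2_normalise(query)
    gallery = l2_normalise(gallery)
    similarities = gallery @ query               # cosine similarity per video
    return np.argsort(-similarities)[:top_k]

print(rank_videos(query_embedding, video_embeddings))
```

Methods covered by the survey extend this basic pipeline by injecting auxiliary signals (objects, speech, rephrased captions, spatio-temporal context) into the video or text representations before the similarity is computed.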
Similar Papers
Are Synthetic Videos Useful? A Benchmark for Retrieval-Centric Evaluation of Synthetic Videos
CV and Pattern Recognition
Makes videos better for searching.
Queries Are Not Alone: Clustering Text Embeddings for Video Search
Information Retrieval
Find videos better by grouping similar search words.
Effectively obtaining acoustic, visual and textual data from videos
Multimedia
Creates new data for AI to learn from videos.