Score: 1

Towards Fine-Grained Video Question Answering

Published: March 10, 2025 | arXiv ID: 2503.06820v1

By: Wei Dai, Alan Luo, Zane Durante, and more

BigTech Affiliations: Stanford University

Potential Business Impact:

Helps computers answer questions about videos by tracking specific objects, actors, and their interactions.

Business Areas:
Motion Capture, Media and Entertainment, Video

In the rapidly evolving domain of video understanding, Video Question Answering (VideoQA) remains a focal point. However, existing datasets exhibit gaps in temporal and spatial granularity, which in turn limit the capabilities of current VideoQA methods. This paper introduces the Multi-Object Multi-Actor Question Answering (MOMA-QA) dataset, designed to address these shortcomings by emphasizing temporal localization, spatial relationship reasoning, and entity-centric queries. With ground-truth scene graphs and temporal interval annotations, MOMA-QA is well suited for developing models for fine-grained video understanding. Furthermore, we present a novel video-language model, SGVLM, which incorporates a scene graph predictor, an efficient frame retriever, and a pre-trained large language model for temporal localization and fine-grained relationship understanding. Evaluations on MOMA-QA and other public datasets demonstrate the superior performance of our model, setting new benchmarks for VideoQA.
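The abstract describes SGVLM as combining a frame retriever for temporal localization with a scene graph predictor for relationship reasoning, feeding a pre-trained LLM. The sketch below shows one plausible way such components could be wired together; all module names, feature dimensions, and fusion details are illustrative assumptions based on the abstract, not the authors' implementation.

```python
# A minimal sketch of an SGVLM-style pipeline (assumptions, not the paper's code).
import torch
import torch.nn as nn


class FrameRetriever(nn.Module):
    """Scores frames against the question to localize relevant intervals."""

    def __init__(self, dim: int = 768):
        super().__init__()
        self.scorer = nn.Linear(dim * 2, 1)

    def forward(self, frame_feats: torch.Tensor, question_feat: torch.Tensor,
                top_k: int = 8):
        # frame_feats: (T, D) per-frame features; question_feat: (D,)
        q = question_feat.expand(frame_feats.size(0), -1)
        scores = self.scorer(torch.cat([frame_feats, q], dim=-1)).squeeze(-1)
        k = min(top_k, frame_feats.size(0))
        idx = scores.topk(k).indices.sort().values  # keep temporal order
        return frame_feats[idx], idx


class SceneGraphPredictor(nn.Module):
    """Predicts relation logits for every ordered pair of detected entities."""

    def __init__(self, dim: int = 768, num_relations: int = 50):
        super().__init__()
        self.rel_head = nn.Sequential(
            nn.Linear(dim * 2, dim), nn.ReLU(), nn.Linear(dim, num_relations)
        )

    def forward(self, entity_feats: torch.Tensor) -> torch.Tensor:
        # entity_feats: (N, D) -> relation logits of shape (N, N, R)
        n = entity_feats.size(0)
        subj = entity_feats.unsqueeze(1).expand(n, n, -1)
        obj = entity_feats.unsqueeze(0).expand(n, n, -1)
        return self.rel_head(torch.cat([subj, obj], dim=-1))


# Example: retrieve question-relevant frames, then score entity relations.
# In a full system, the selected frames and relation features would be
# projected into the token space of a pre-trained LLM that generates the answer.
frames = torch.randn(64, 768)    # hypothetical per-frame features
question = torch.randn(768)      # hypothetical question embedding
entities = torch.randn(5, 768)   # hypothetical per-entity features

kept_frames, kept_idx = FrameRetriever()(frames, question)
relation_logits = SceneGraphPredictor()(entities)
print(kept_idx.tolist(), relation_logits.shape)  # e.g. [...] torch.Size([5, 5, 50])
```

The design choice sketched here, selecting a small set of question-relevant frames before relationship reasoning, reflects the abstract's emphasis on efficient frame retrieval and fine-grained, entity-centric queries.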

Country of Origin
🇺🇸 United States

Page Count
13 pages

Category
Computer Science:
Computer Vision and Pattern Recognition