Bridging Vision Language Models and Symbolic Grounding for Video Question Answering
By: Haodi Ma, Vyom Pathak, Daisy Zhe Wang
Potential Business Impact:
Helps computers understand videos better by recognizing relationships between objects.
Video Question Answering (VQA) requires models to reason over spatial, temporal, and causal cues in videos. Recent vision language models (VLMs) achieve strong results but often rely on shallow correlations, leading to weak temporal grounding and limited interpretability. We study symbolic scene graphs (SGs) as intermediate grounding signals for VQA. SGs provide structured object-relation representations that complement VLMs' holistic reasoning. We introduce SG-VLM, a modular framework that integrates frozen VLMs with scene graph grounding via prompting and visual localization. Across three benchmarks (NExT-QA, iVQA, ActivityNet-QA) and multiple VLMs (QwenVL, InternVL), SG-VLM improves causal and temporal reasoning and outperforms prior baselines, though gains over strong VLMs are limited. These findings highlight both the promise and current limitations of symbolic grounding, and offer guidance for future hybrid VLM-symbolic approaches in video understanding.
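The abstract describes grounding via prompting only at a high level. As a rough illustration of that idea, the Python sketch below shows one way scene-graph triples could be serialized into a text prompt for a frozen VLM; the SGTriple class, build_grounded_prompt function, and prompt wording are illustrative assumptions, not SG-VLM's actual interface or released code.

# Minimal sketch (not the authors' code): serializing scene-graph triples
# into a text prompt for a frozen VLM, following the paper's high-level idea
# of scene graph grounding via prompting. All names here are assumptions.

from dataclasses import dataclass
from typing import List


@dataclass
class SGTriple:
    """One object-relation-object fact, tied to a video frame index."""
    subject: str
    relation: str
    obj: str
    frame: int


def build_grounded_prompt(question: str, triples: List[SGTriple]) -> str:
    """Format scene-graph facts as text and append the question."""
    facts = "\n".join(
        f"- frame {t.frame}: ({t.subject}, {t.relation}, {t.obj})"
        for t in sorted(triples, key=lambda t: t.frame)
    )
    return (
        "Scene graph facts extracted from the video:\n"
        f"{facts}\n\n"
        f"Question: {question}\n"
        "Answer using the facts above together with the video frames."
    )


if __name__ == "__main__":
    triples = [
        SGTriple("person", "holds", "cup", frame=12),
        SGTriple("person", "walks_to", "table", frame=30),
    ]
    prompt = build_grounded_prompt("Why does the person walk to the table?", triples)
    print(prompt)  # This text would be passed, with sampled frames, to a frozen VLM such as QwenVL or InternVL.

The design choice sketched here keeps the VLM frozen: the scene graph only changes the textual input, so any off-the-shelf video VLM can consume the grounded prompt without fine-tuning.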
Similar Papers
NeuS-QA: Grounding Long-Form Video Understanding in Temporal Logic and Neuro-Symbolic Reasoning
CV and Pattern Recognition
Answers questions about long videos better.
A Survey on Video Temporal Grounding with Multimodal Large Language Model
CV and Pattern Recognition
Helps computers find specific moments in videos.
CounterVQA: Evaluating and Improving Counterfactual Reasoning in Vision-Language Models for Video Understanding
CV and Pattern Recognition
Helps computers imagine "what if" in videos.