Enhancing Video Large Language Models with Structured Multi-Video Collaborative Reasoning (early version)
By: Zhihao He, Tianyao He, Tieyuan Chen, and others
Potential Business Impact:
Helps computers understand videos better by using many related videos together.
Despite the rapid progress of video language models, the pursuit of comprehensive video reasoning is thwarted by the inherent spatio-temporal incompleteness of individual videos, which leads to hallucinations and inaccuracies. A promising remedy is to augment reasoning with multiple related videos. However, video tokens are numerous and redundant, so directly feeding related video data into a large language model can be counterproductive. To address this challenge, we propose a multi-video collaborative framework for video language models. For efficient and flexible video representation, we establish a Video Structuring Module that represents a video's knowledge as a spatio-temporal graph. Building on this structured representation, we design a Graph Fusion Module that fuses the structured knowledge and valuable information from related videos into augmented graph node tokens. Finally, we construct an elaborate multi-video structured prompt that integrates the graph, visual, and textual tokens as the input to the large language model. Extensive experiments substantiate the effectiveness of our framework, showcasing it as a promising avenue for advancing video language models.
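The pipeline the abstract describes (structure each video as a spatio-temporal graph, fuse related-video knowledge into the query graph's nodes, then serialize graph tokens into the prompt) can be sketched in plain Python. This is a minimal illustrative sketch, not the paper's implementation: all names (`Node`, `VideoGraph`, `structure_video`, `fuse_graphs`, `build_prompt`) and the label-matching fusion rule are assumptions for exposition.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    label: str   # e.g. a detected object or action
    frame: int   # temporal position in the video

@dataclass
class VideoGraph:
    nodes: list = field(default_factory=list)
    edges: list = field(default_factory=list)  # (i, j) spatio-temporal links

def structure_video(detections):
    """Video Structuring Module (sketch): turn per-frame detections
    into a graph, linking same-label nodes in adjacent frames."""
    g = VideoGraph()
    for frame, labels in enumerate(detections):
        for label in labels:
            g.nodes.append(Node(label, frame))
    for i, a in enumerate(g.nodes):
        for j, b in enumerate(g.nodes):
            if b.frame == a.frame + 1 and a.label == b.label:
                g.edges.append((i, j))  # temporal edge
    return g

def fuse_graphs(query_graph, related_graphs):
    """Graph Fusion Module (sketch): keep only related-video nodes whose
    labels also occur in the query graph, augmenting its node set."""
    fused = VideoGraph(list(query_graph.nodes), list(query_graph.edges))
    query_labels = {n.label for n in query_graph.nodes}
    for g in related_graphs:
        fused.nodes.extend(n for n in g.nodes if n.label in query_labels)
    return fused

def build_prompt(fused, question):
    """Serialize graph node tokens alongside the textual question."""
    graph_tokens = " ".join(f"<{n.label}@{n.frame}>" for n in fused.nodes)
    return f"[GRAPH] {graph_tokens} [TEXT] {question}"
```

In a real system the nodes would carry learned embeddings and fusion would be similarity-based rather than exact label matching; the sketch only shows how the three modules compose so that redundant related-video content is filtered before reaching the language model.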
Similar Papers
LongVT: Incentivizing "Thinking with Long Videos" via Native Tool Calling
CV and Pattern Recognition
Helps computers understand long videos better.
UFVideo: Towards Unified Fine-Grained Video Cooperative Understanding with Large Language Models
CV and Pattern Recognition
Lets computers understand videos at different levels.
Video-R2: Reinforcing Consistent and Grounded Reasoning in Multimodal Language Models
CV and Pattern Recognition
Helps computers understand videos by watching carefully.