FiLA-Video: Spatio-Temporal Compression for Fine-Grained Long Video Understanding
By: Yanan Guo, Wenhui Dong, Jun Song, and more
Potential Business Impact:
Helps computers understand long videos better.
Recent advancements in video understanding within visual large language models (VLLMs) have led to notable progress. However, the complexity of video data and limits on contextual processing still hinder long-video comprehension. A common approach is video feature compression, which reduces the token input to the large language model, yet many methods either fail to prioritize essential features, leading to redundant inter-frame information, or introduce computationally expensive modules. To address these issues, we propose FiLA-Video (FiLA: Fine-grained Vision Language Model), a novel framework that leverages a lightweight dynamic-weight multi-frame fusion strategy, adaptively integrating multiple frames into a single representation while preserving key video information and reducing computational cost. To improve the choice of frames for fusion, we introduce a keyframe selection strategy that effectively identifies informative frames from a larger pool for better summarization. Additionally, we present a simple yet effective strategy for generating long-video training data, boosting model performance without extensive manual annotation. Experimental results demonstrate that FiLA-Video achieves superior efficiency and accuracy in long-video comprehension compared to existing methods.
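The abstract names two mechanisms without giving their exact form: a dynamic-weight fusion that collapses several frames into one representation, and a keyframe selector that picks informative frames from a larger pool. The PyTorch sketch below is only a rough illustration of those two ideas under assumed interfaces, not the paper's actual modules; the class and function names (DynamicWeightFusion, select_keyframes), the linear scorer, and the change-based novelty heuristic are all hypothetical stand-ins.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DynamicWeightFusion(nn.Module):
    """Hypothetical sketch: fuse a group of frame features into a single
    representation using per-frame weights from a lightweight scorer."""

    def __init__(self, dim: int):
        super().__init__()
        self.scorer = nn.Linear(dim, 1)  # one scalar score per frame

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (num_frames, num_tokens, dim) visual features for one group
        pooled = frames.mean(dim=1)                       # (num_frames, dim)
        weights = F.softmax(self.scorer(pooled), dim=0)   # (num_frames, 1)
        # Weighted sum over frames yields one fused token map: (num_tokens, dim)
        return (weights.unsqueeze(-1) * frames).sum(dim=0)


def select_keyframes(features: torch.Tensor, k: int) -> torch.Tensor:
    """Hypothetical keyframe selection: keep the k frames that differ most
    from their predecessor, a simple proxy for 'informative' frames."""
    # features: (num_frames, num_tokens, dim)
    pooled = F.normalize(features.mean(dim=1), dim=-1)    # (num_frames, dim)
    sim = (pooled[1:] * pooled[:-1]).sum(dim=-1)          # cosine sim to previous
    novelty = torch.cat([torch.ones(1), 1.0 - sim])       # first frame kept novel
    return torch.topk(novelty, k).indices.sort().values   # ascending frame indices


# Usage sketch: select 8 keyframes from 64, then fuse them into one representation.
feats = torch.randn(64, 196, 1024)
keep = select_keyframes(feats, k=8)
fused = DynamicWeightFusion(dim=1024)(feats[keep])        # (196, 1024)
```

The design point the abstract emphasizes is that the fusion stays lightweight (here a single linear scorer) so that compressing many frames into one token map cuts LLM input length without adding an expensive module.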
Similar Papers
KFFocus: Highlighting Keyframes for Enhanced Video Understanding
CV and Pattern Recognition
Helps computers understand long videos better.
LVC: A Lightweight Compression Framework for Enhancing VLMs in Long Video Understanding
CV and Pattern Recognition
Helps computers understand long videos better.
BIMBA: Selective-Scan Compression for Long-Range Video Question Answering
CV and Pattern Recognition
Lets computers understand long videos better.