VidEvent: A Large Dataset for Understanding Dynamic Evolution of Events in Videos
By: Baoyu Liang, Qile Su, Shoutai Zhu, and more
Potential Business Impact:
Helps computers understand stories in videos.
Despite the significant impact of visual events on human cognition, understanding events in videos remains a challenging task for AI due to their complex structures, semantic hierarchies, and dynamic evolution. To address this, we propose the task of video event understanding, which extracts event scripts from videos and uses these scripts to make predictions. To support this task, we introduce VidEvent, a large-scale dataset containing over 23,000 well-labeled events, featuring detailed event structures, broad hierarchies, and logical relations extracted from movie recap videos. The dataset was created through a meticulous annotation process, ensuring high-quality and reliable event data. We also provide comprehensive baseline models, with detailed descriptions of their architectures and performance metrics. These models serve as benchmarks for future research, facilitating comparisons and improvements. Our analysis of VidEvent and the baseline models highlights the dataset's potential to advance video event understanding and encourages the exploration of innovative algorithms and models. The dataset and related resources are publicly available at www.videvent.top.
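To make the abstract's notions of event structures, hierarchies, and logical relations concrete, here is a minimal sketch of how one such annotation might be modeled in code. All field and function names here (`Event`, `parent_id`, `relations`, `build_script`) are illustrative assumptions, not the dataset's actual schema, which is defined by the resources at www.videvent.top.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

# Hypothetical model of a single labeled event: a described action with a
# temporal span, an optional parent (hierarchy), and logical links to
# other events (e.g. causal relations). Illustrative only.
@dataclass
class Event:
    event_id: str
    description: str                 # e.g. "the hero boards a train"
    start_sec: float                 # start of the event's span in the video
    end_sec: float                   # end of the event's span
    parent_id: Optional[str] = None  # hierarchy: sub-event points to its parent
    relations: List[Tuple[str, str]] = field(default_factory=list)
    # logical links, e.g. ("causes", "ev_2")

def build_script(events: List[Event]) -> List[Event]:
    """Order events into a simple script by their start time."""
    return sorted(events, key=lambda e: e.start_sec)

# Usage: two linked events, deliberately given out of order.
evs = [
    Event("ev_2", "train departs", 12.0, 18.5, parent_id="ev_1"),
    Event("ev_1", "hero boards a train", 3.0, 11.0,
          relations=[("causes", "ev_2")]),
]
script = build_script(evs)
```

A script here is just a temporally ordered list of events; a prediction task over such scripts could, for instance, mask the final event and ask a model to infer it from the preceding ones.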
Similar Papers
Event Stream-based Visual Object Tracking: HDETrack V2 and A High-Definition Benchmark
CV and Pattern Recognition
Tracks moving things in videos better, even in low light.
A Video-grounded Dialogue Dataset and Metric for Event-driven Activities
CV and Pattern Recognition
Helps computers understand videos and answer questions.
EventSTU: Event-Guided Efficient Spatio-Temporal Understanding for Video Large Language Models
CV and Pattern Recognition
Makes video AI faster by skipping boring parts.