MSC: A Marine Wildlife Video Dataset with Grounded Segmentation and Clip-Level Captioning
By: Quang-Trung Truong, Yuk-Kwan Wong, Vo Hoang Kim Tuyen Dang, and more
Potential Business Impact:
Helps computers understand ocean videos and marine life.
Marine videos present significant challenges for video understanding due to the dynamics of marine objects and the surrounding environment, camera motion, and the complexity of underwater scenes. Existing video captioning datasets, typically focused on generic or human-centric domains, often fail to generalize to the complexities of the marine environment or to yield insights about marine life. To address these limitations, we propose a two-stage marine object-oriented video captioning pipeline. We introduce a comprehensive video understanding benchmark that leverages triplets of video, text, and segmentation masks to facilitate visual grounding and captioning, leading to improved marine video understanding, analysis, and generation. Additionally, we highlight the effectiveness of video splitting for detecting salient object transitions across scene changes, which significantly enriches the semantics of captioning content. Our dataset and code have been released at https://msc.hkustvgd.com.
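The video-splitting idea mentioned in the abstract can be illustrated with a generic scene-cut detector: compare intensity histograms of consecutive frames and start a new clip whenever the distribution jumps. This is only a minimal sketch of the general technique, not the paper's actual pipeline; the function name, bin count, and threshold are assumptions for illustration.

```python
import numpy as np

def split_into_clips(frames, threshold=0.3, bins=32):
    """Split a frame sequence into clips at abrupt histogram changes.

    frames: list of HxW grayscale uint8 arrays.
    Returns a list of (start, end) index pairs, end exclusive.
    NOTE: illustrative sketch only; not the MSC authors' method.
    """
    boundaries = [0]
    for i in range(1, len(frames)):
        # Normalized intensity histograms of the previous and current frame.
        h_prev = np.histogram(frames[i - 1], bins=bins, range=(0, 256))[0].astype(float)
        h_curr = np.histogram(frames[i], bins=bins, range=(0, 256))[0].astype(float)
        h_prev /= h_prev.sum()
        h_curr /= h_curr.sum()
        # Half the L1 distance lies in [0, 1]; a large jump suggests a scene cut.
        if 0.5 * np.abs(h_prev - h_curr).sum() > threshold:
            boundaries.append(i)
    boundaries.append(len(frames))
    return [(boundaries[j], boundaries[j + 1]) for j in range(len(boundaries) - 1)]
```

Each resulting clip can then be captioned independently, so a caption describes one coherent scene rather than averaging over a cut.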
Similar Papers
DeepSea MOT: A benchmark dataset for multi-object tracking on deep-sea video
CV and Pattern Recognition
Helps robots see better in the deep ocean.