AUTV: Creating Underwater Video Datasets with Pixel-wise Annotations
By: Quang Trung Truong, Wong Yuk Kwan, Duc Thanh Nguyen, and more
Potential Business Impact:
Creates realistic, pixel-labeled underwater videos for training vision models and robots.
Underwater video analysis, hampered by the dynamic marine environment and camera motion, remains a challenging task in computer vision. Existing training-free video generation techniques, which learn motion dynamics on a frame-by-frame basis, often produce poor results with noticeable motion interruptions and misalignments. To address these issues, we propose AUTV, a framework for synthesizing marine video data with pixel-wise annotations. We demonstrate the effectiveness of this framework by constructing two video datasets: UTV, a real-world dataset comprising 2,000 video-text pairs, and SUTV, a synthetic video dataset of 10,000 videos with segmentation masks for marine objects. UTV provides diverse underwater videos with comprehensive annotations covering appearance, texture, camera intrinsics, lighting, and animal behavior. SUTV can be used to improve underwater downstream tasks, as demonstrated on video inpainting and video object segmentation.
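To make the dataset structure concrete, below is a minimal Python sketch of how a SUTV-style clip (RGB frames paired with per-frame segmentation masks) might be loaded for a downstream task such as video object segmentation. The directory layout, file names, and function names are assumptions for illustration only, not the official release format.

    # Hypothetical loader for a SUTV-style synthetic video dataset.
    # Assumed layout: <root>/<video_id>/frames/*.png and <root>/<video_id>/masks/*.png
    from pathlib import Path
    from dataclasses import dataclass
    from typing import List


    @dataclass
    class VideoSample:
        """One synthetic clip: RGB frames paired with per-frame segmentation masks."""
        video_id: str
        frame_paths: List[Path]
        mask_paths: List[Path]


    def load_sutv_samples(root: str) -> List[VideoSample]:
        """Scan the assumed directory layout and pair each frame with its mask."""
        samples = []
        for video_dir in sorted(Path(root).iterdir()):
            frames = sorted((video_dir / "frames").glob("*.png"))
            masks = sorted((video_dir / "masks").glob("*.png"))
            if not frames or len(frames) != len(masks):
                # Skip clips whose frames and masks are not one-to-one aligned.
                continue
            samples.append(VideoSample(video_dir.name, frames, masks))
        return samples


    if __name__ == "__main__":
        samples = load_sutv_samples("SUTV")  # hypothetical dataset root
        print(f"Loaded {len(samples)} clips with pixel-wise masks")

A loader like this could feed a standard video-segmentation training loop, with each sample's mask paths supplying the pixel-wise supervision described in the abstract.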
Similar Papers
Closer to Ground Truth: Realistic Shape and Appearance Labeled Data Generation for Unsupervised Underwater Image Segmentation
CV and Pattern Recognition
Helps computers count fish in murky water.
Uncovering Anomalous Events for Marine Environmental Monitoring via Visual Anomaly Detection
CV and Pattern Recognition
Finds rare sea creatures in hours of video.
Knowledge Distillation for Underwater Feature Extraction and Matching via GAN-synthesized Images
CV and Pattern Recognition
Helps underwater robots see and map better.