UWBench: A Comprehensive Vision-Language Benchmark for Underwater Understanding
By: Da Zhang, Chenggang Rong, Bingyu Li, and more
Potential Business Impact:
Helps computers understand what's underwater.
Large vision-language models (VLMs) have achieved remarkable success in natural scene understanding, yet their application to underwater environments remains largely unexplored. Underwater imagery presents unique challenges, including severe light attenuation, color distortion, and suspended particle scattering, while also requiring specialized knowledge of marine ecosystems and organism taxonomy. To bridge this gap, we introduce UWBench, a comprehensive benchmark specifically designed for underwater vision-language understanding. UWBench comprises 15,003 high-resolution underwater images captured across diverse aquatic environments, encompassing oceans, coral reefs, and deep-sea habitats. Each image is enriched with human-verified annotations, including 15,281 object referring expressions that precisely describe marine organisms and underwater structures, and 124,983 question-answer pairs covering reasoning capabilities that range from object recognition to ecological relationship understanding. The dataset captures rich variations in visibility, lighting conditions, and water turbidity, providing a realistic testbed for model evaluation. Based on UWBench, we establish three benchmark tasks: detailed image captioning for generating ecologically informed scene descriptions, visual grounding for precise localization of marine organisms, and visual question answering for multimodal reasoning about underwater environments. Extensive experiments on state-of-the-art VLMs demonstrate that underwater understanding remains challenging, with substantial room for improvement. Our benchmark provides essential resources for advancing vision-language research in underwater contexts and supporting applications in marine science, ecological monitoring, and autonomous underwater exploration. Our code and benchmark will be made available.
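The dataset format and evaluation code have not been released yet, so the sketch below is only a hypothetical Python schema for how a single UWBench-style record might be organized across the three tasks (caption, referring expressions with boxes, QA pairs), plus a simple IoU check of the kind commonly used to score visual grounding. All class names, field names, file paths, and the 0.5 IoU threshold are assumptions for illustration, not the authors' actual interface.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class ReferringExpression:
    # Phrase that uniquely identifies one marine organism or underwater
    # structure, paired with its bounding box (x1, y1, x2, y2) in pixels.
    expression: str
    bbox: Tuple[float, float, float, float]

@dataclass
class QAPair:
    # One VQA item, e.g. object recognition or ecological-relationship reasoning.
    question: str
    answer: str

@dataclass
class UWBenchSample:
    # Hypothetical per-image record covering the three benchmark tasks:
    # detailed captioning, visual grounding, and visual question answering.
    image_path: str
    caption: str  # ecologically informed scene description
    references: List[ReferringExpression] = field(default_factory=list)
    qa_pairs: List[QAPair] = field(default_factory=list)

# Illustrative record (values invented, not taken from the dataset).
sample = UWBenchSample(
    image_path="images/coral_reef_0001.jpg",
    caption="A shallow coral reef with a school of anthias above branching coral.",
    references=[ReferringExpression("the clownfish near the anemone",
                                    (120.0, 85.0, 210.0, 160.0))],
    qa_pairs=[QAPair("What type of habitat is shown?", "A coral reef.")],
)

def grounding_hit(pred_bbox, gold_bbox, iou_threshold=0.5):
    """Score one grounding prediction by IoU; the 0.5 threshold is a common
    convention for referring-expression localization, assumed here."""
    ax1, ay1, ax2, ay2 = pred_bbox
    bx1, by1, bx2, by2 = gold_bbox
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return union > 0 and inter / union >= iou_threshold

print(grounding_hit((118, 80, 205, 158), sample.references[0].bbox))  # True
```

A per-sample structure like this keeps the captioning, grounding, and VQA annotations attached to the same image, which is how a benchmark harness would typically iterate over the data when scoring a VLM on all three tasks.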
Similar Papers
Knowledge-Augmented Vision Language Models for Underwater Bioacoustic Spectrogram Analysis
CV and Pattern Recognition
Lets computers understand whale songs without training.
TDBench: Benchmarking Vision-Language Models in Understanding Top-Down Images
Machine Learning (CS)
Helps computers understand bird's-eye view images better.
IndicVisionBench: Benchmarking Cultural and Multilingual Understanding in VLMs
CV and Pattern Recognition
Tests AI on Indian languages and culture.